5 Answers
Contributor: 1946 experience points · 4+ upvotes
In this example I will use the C++ OpenCV library and Visual Studio 2017. I will try to capture the ARCore camera image, move it to OpenCV (as efficiently as possible), convert it to the RGB color space, then move it back to the Unity C# code and save it in the phone's memory.
First, we have to create a C++ dynamic library project to use with OpenCV. For this I strongly recommend following Pierre Baret's and Ninjaman494's answers to this question: OpenCV + Android + Unity. The process is fairly straightforward, and as long as you do not deviate too much from their answers (i.e. you can safely download a newer OpenCV than version 3.3.1, but be careful when compiling for ARM64 instead of ARM, etc.), you should be able to call a C++ function from C#.
In my experience, I had to solve two problems. First, if you make the project part of your C# solution instead of creating a new solution, Visual Studio keeps messing with your configuration, for example trying to compile an x86 version instead of an ARM version. To save yourself the hassle, create a completely separate solution. The other problem was that some functions failed to link for me, throwing an undefined reference linker error (undefined reference to 'cv::error(int, std::string const&, char const*, char const*, int)', to be exact). If this happens and the problem is with a function you do not really need, just recreate the function in your own code - for example, if you have problems with cv::error, add this code to the end of your .cpp file:
namespace cv {
    __noreturn void error(int a, const String & b, const char * c, const char * d, int e) {
        throw std::string(b);
    }
}
Of course, this is an ugly and dirty way of doing things, so if you know how to fix the linker error, please do so and let me know.
Now you should have working C++ code that compiles and can be run from a Unity Android application. However, we want OpenCV to convert an image rather than just return a number. So change your code to this:
.h file
extern "C" {
namespace YOUR_OWN_NAMESPACE
{
int ConvertYUV2RGBA(unsigned char *, unsigned char *, int, int);
}
}
.cpp file
extern "C" {
int YOUR_OWN_NAMESPACE::ConvertYUV2RGBA(unsigned char * inputPtr, unsigned char * outputPtr, int width, int height) {
// Create Mat objects for the YUV and RGB images. For YUV, we need a
// height*1.5 x width image, that has one 8-bit channel. We can also tell
// OpenCV to have this Mat object "encapsulate" an existing array,
// which is inputPtr.
// For RGB image, we need a height x width image, that has three 8-bit
// channels. Again, we tell OpenCV to encapsulate the outputPtr array.
// Thanks to specifying existing arrays as data sources, no copying
// or memory allocation has to be done, and the process is highly
// effective.
cv::Mat input_image(height + height / 2, width, CV_8UC1, inputPtr);
cv::Mat output_image(height, width, CV_8UC3, outputPtr);
// If any of the images has not loaded, return 1 to signal an error.
if (input_image.empty() || output_image.empty()) {
return 1;
}
// Convert the image. Now you might have seen people telling you to use
// NV21 or 420sp instead of NV12, and BGR instead of RGB. I do not
// understand why, but this was the correct conversion for me.
// If you have any problems with the color in the output image,
// they are probably caused by incorrect conversion. In that case,
// I can only recommend you the trial and error method.
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV12);
// Now that the result is safely saved in outputPtr, we can return 0.
return 0;
}
}
Now rebuild the solution (Ctrl + Shift + B) and copy the libProjectName.so file to Unity's Plugins/Android folder, as shown in the linked answer.
The next step is to grab the image from ARCore, move it to the C++ code, and get it back. Let's add this inside the class of our C# script:
[DllImport("YOUR_OWN_NAMESPACE")]
public static extern int ConvertYUV2RGBA(IntPtr input, IntPtr output, int width, int height);
Visual Studio will prompt you to add a using clause for System.Runtime.InteropServices - do so. This lets us use the C++ function from C# code. Now let's add this function to our C# component:
public Texture2D CameraToTexture()
{
    // Create the object for the result - this has to be done before the
    // using {} clause.
    Texture2D result;

    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }

        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        // I will explain why later.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

        // As CameraImageBytes keep the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }

            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }

        // Create the output byte array. RGB is three channels, therefore
        // we need 3 times the pixel count
        byte[] RGBimage = new byte[camBytes.Width * camBytes.Height * 3];

        // GCHandles help us "pin" the arrays in the memory, so that we can
        // pass them to the C++ code.
        GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);

        // Call the C++ function that we created.
        int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);

        // If OpenCV conversion failed, return null
        if (k != 0)
        {
            Debug.LogWarning("Color conversion - k != 0");
            return null;
        }

        // Create a new texture object
        result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);

        // Load the RGB array to the texture, send it to GPU
        result.LoadRawTextureData(RGBimage);
        result.Apply();

        // Save the texture as a PNG file. End the using {} clause to
        // dispose of the CameraImageBytes.
        File.WriteAllBytes(Application.persistentDataPath + "/tex.png", result.EncodeToPNG());
    }

    // Return the texture.
    return result;
}
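For reference, the C# snippet above relies on a few more using directives besides System.Runtime.InteropServices; here is a sketch of them (the GoogleARCore namespace assumes the ARCore SDK for Unity):

using System;                         // IntPtr
using System.IO;                      // File
using System.Runtime.InteropServices; // DllImport, GCHandle
using GoogleARCore;                   // Frame, CameraImageBytes
using UnityEngine;                    // MonoBehaviour, Texture2D, Debug, Application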
To be able to run unsafe code, you also need to allow it in Unity. Go to the Player Settings (Edit > Project Settings > Player Settings) and check the Allow unsafe code checkbox.
Now you can call the CameraToTexture() function, say, every 5 seconds from Update(), and the camera image should be saved as /Android/data/YOUR_APPLICATION_PACKAGE/files/tex.png. The image will probably be in landscape orientation even if you hold the phone in portrait mode, but that is not so hard to fix anymore. Also, you may notice a freeze every time an image is saved, so I recommend calling this function from a separate thread. The most demanding operation here is saving the image as a PNG file, so if you need the image for any other reason, you should be fine (but still use a separate thread).
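As a minimal sketch of such a periodic call (my own illustration; the 5-second interval and the simple timer are arbitrary choices):

private float saveTimer = 0f;

void Update()
{
    saveTimer += Time.deltaTime;
    if (saveTimer >= 5f)
    {
        saveTimer = 0f;
        // This runs on the main thread; as noted above, consider moving the
        // PNG encoding/saving to a separate thread to avoid the freeze.
        Texture2D tex = CameraToTexture();
    }
}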
If you want to learn about the YUV_420_888 format, why you need a 1.5*pixelCount array, and why we modify the arrays the way we do, read https://wiki.videolan.org/YUV/#NV12. Other websites seem to have incorrect information about how this format works.
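As a quick sanity check on the 1.5x figure (my own worked example, not part of the original answer): in NV12, the Y plane stores one byte per pixel and the interleaved UV plane stores one U and one V byte per 2x2 pixel block.

// Hypothetical buffer-size calculation for a 640x480 NV12 frame.
int width = 640, height = 480;
int ySize  = width * height;                  // 307200 bytes, one byte per pixel
int uvSize = (width / 2) * (height / 2) * 2;  // 153600 bytes, U + V per 2x2 block
int total  = ySize + uvSize;                  // 460800 = 1.5 * 640 * 480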
Also, if you have any questions, feel free to leave a comment and I will try to help with them; any feedback on the code and the answer is welcome too.
Addendum 1: According to https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html, you should use GetRawTextureData instead of LoadRawTextureData to avoid copying. To do this, just pin the array returned by GetRawTextureData instead of the RGBimage array (which you can remove). Also, don't forget to call result.Apply(); afterwards.
Addendum 2: Don't forget to call Free() on both GCHandles when you are done using them.
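For example, at the end of CameraToTexture(), once the converted data has been loaded into the texture (the exact placement is my own choice; the addendum only says to do it when you are done):

// Release the pinned arrays once the native side no longer needs them.
YUVhandle.Free();
RGBhandle.Free();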
Contributor: 1864 experience points · 2+ upvotes
Here is an implementation that uses only the free plugin OpenCV Plus Unity. The setup is very simple if you are familiar with OpenCV, and the documentation is great.
This implementation uses OpenCV to rotate the images correctly, stores them in memory, and saves them to files when the application exits. I have tried to strip all the Unity-specific parts out of the code so that the function GetCameraImage() can run on a separate thread.
I can confirm it runs on Android (a GS7), and I expect it works fairly universally.
using System;
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;
using OpenCvSharp;
using System.Runtime.InteropServices;

public class CamImage : MonoBehaviour
{
    public static List<Mat> AllData = new List<Mat>();

    public static void GetCameraImage()
    {
        // Use using to make sure that C# disposes of the CameraImageBytes afterwards
        using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
        {
            // If acquiring failed, return
            if (!camBytes.IsAvailable)
            {
                return;
            }

            // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
            byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

            // As CameraImageBytes keep the Y, U and V data in three separate
            // arrays, we need to put them in a single array. This is done using
            // native pointers, which are considered unsafe in C#.
            unsafe
            {
                for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
                {
                    YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
                }

                for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
                {
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                }
            }

            // GCHandle lets us "pin" the array in memory, so that we can
            // hand its address to OpenCvSharp.
            GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
            IntPtr pointerYUV = pinnedArray.AddrOfPinnedObject();

            Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, MatType.CV_8UC1, pointerYUV);
            Mat output = new Mat(camBytes.Height, camBytes.Width, MatType.CV_8UC3);
            Cv2.CvtColor(input, output, ColorConversionCodes.YUV2BGR_NV12); // or YUV2RGB_NV12

            // FLIP AND TRANSPOSE TO VERTICAL
            Cv2.Transpose(output, output);
            Cv2.Flip(output, output, FlipMode.Y);

            AllData.Add(output);
            pinnedArray.Free();
        }
    }
}
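A minimal sketch of how the class above might be driven (my own addition; the component name, the call site and the 30-frame interval are arbitrary choices):

public class CamImageDriver : MonoBehaviour
{
    private int frameCounter = 0;

    void Update()
    {
        // Grab a camera frame roughly every 30 rendered frames.
        if (++frameCounter % 30 == 0)
        {
            CamImage.GetCameraImage();
        }
    }
}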
Then I call ExportImages() when exiting the program to save everything to files.
private void ExportImages()
{
    // Write the camera intrinsics to a text file
    var path = Application.persistentDataPath;
    StreamWriter sr = new StreamWriter(path + @"/intrinsics.txt");
    sr.WriteLine(CameraIntrinsicsOutput.text);
    Debug.Log(CameraIntrinsicsOutput.text);
    sr.Close();

    // Loop through the Mat list, convert each one to a texture and save it.
    for (var i = 0; i < CamImage.AllData.Count; i++)
    {
        Mat imOut = CamImage.AllData[i];
        Texture2D result = Unity.MatToTexture(imOut);
        result.Apply();

        byte[] im = result.EncodeToJPG(100);
        string fileName = "/IMG" + i + ".jpg";
        File.WriteAllBytes(path + fileName, im);

        string messge = "Successfully Saved Image To " + path + "\n";
        Debug.Log(messge);
        Destroy(result);
    }
}
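Hooking that up could look like this (a sketch; OnApplicationQuit is just one possible place, and note that ExportImages() also needs using System.IO for StreamWriter and File):

void OnApplicationQuit()
{
    ExportImages();
}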
Contributor: 1797 experience points · 6+ upvotes
I figured out how to get the full-resolution CPU image in ARCore 1.8.
I can now get the full camera resolution with CameraImageBytes.
Put this with your class variables:
private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;
Put this in Start():
m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
ARSessionManager.enabled = false;
ARSessionManager.enabled = true;
Add this callback to the class:
private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
{
    return supportedConfigurations.Count - 1;
}
Once you have added these, CameraImageBytes should return the camera's full resolution.
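Put together in a single component, this could look roughly like the following (a sketch; the class name is mine, ARSessionManager is assumed to be a reference to the ARCoreSession component in the scene, and choosing supportedConfigurations.Count - 1 assumes the last entry is the highest-resolution configuration):

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class FullResCameraConfig : MonoBehaviour
{
    // Assign the scene's ARCoreSession component in the Inspector.
    public ARCoreSession ARSessionManager;

    private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;

    void Start()
    {
        m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
        ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);

        // Restart the session so the new callback takes effect.
        ARSessionManager.enabled = false;
        ARSessionManager.enabled = true;
    }

    private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
    {
        // Pick the last supported configuration.
        return supportedConfigurations.Count - 1;
    }
}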
Contributor: 1898 experience points · 8+ upvotes
For everyone who wants to try this with OpenCVForUnity:
public Mat getCameraImage()
{
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }

        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

        // As CameraImageBytes keep the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }

            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }

        // Pin the YUV array in memory so OpenCVForUnity can copy from its address.
        GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        IntPtr pointer = pinnedArray.AddrOfPinnedObject();

        Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, CvType.CV_8UC1);
        Mat output = new Mat(camBytes.Height, camBytes.Width, CvType.CV_8UC3);
        Utils.copyToMat(pointer, input);
        Imgproc.cvtColor(input, output, Imgproc.COLOR_YUV2RGB_NV12);

        pinnedArray.Free();
        return output;
    }
}
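To actually display or save the returned Mat, one option is OpenCVForUnity's Mat-to-texture helper (a sketch; in recent plugin versions the Utils class lives in OpenCVForUnity.UnityUtils, in older ones in OpenCVForUnity):

Mat rgb = getCameraImage();
if (rgb != null)
{
    Texture2D tex = new Texture2D(rgb.cols(), rgb.rows(), TextureFormat.RGB24, false);
    Utils.matToTexture2D(rgb, tex);
    // tex can now be shown on a RawImage or encoded with EncodeToPNG/EncodeToJPG.
}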
Contributor: 1840 experience points · 5+ upvotes
It looks like you have already solved the problem.
But for anyone who wants to combine AR with gesture recognition and tracking, try Manomotion: https://www.manomotion.com/
The SDK is free and worked perfectly as of 12/2020.
Use the SDK Community edition and download the ARFoundation version.