I am using an OpenCV camera to track objects. The setup code is below, with defaultAVCaptureVideoOrientation set to landscape right. When I rotate the device to landscape left, the camera image is not rotated correctly, so I override layoutPreviewLayer to adjust the rotation, but it is not working.
self.videoCamera = [[VideoCamera alloc] initWithParentView:imgView];
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeRight;
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
self.videoCamera.rotateVideo = YES;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
self.videoCamera.defaultFPS = 30;
self.videoCamera.delegate = self;
self.videoCamera.recordVideo = NO;
self.videoCamera.grayscaleMode = NO;
With the code below, landscape right works fine, but when I rotate the iPhone XR to landscape left, it does not.
@implementation VideoCamera

- (void)updateOrientation
{
    self->customPreviewLayer.bounds = CGRectMake(0, 0, self.parentView.frame.size.width, self.parentView.frame.size.height);
    [self layoutPreviewLayer];
}

- (void)layoutPreviewLayer
{
    if (self.parentView != nil)
    {
        CALayer* layer = self->customPreviewLayer;
        CGRect bounds = self->customPreviewLayer.bounds;
        int rotation_angle = 0;
        switch (self.defaultAVCaptureVideoOrientation) {
            case AVCaptureVideoOrientationLandscapeRight:
                rotation_angle = -180;
                break;
            case AVCaptureVideoOrientationLandscapeLeft:
                rotation_angle = 180;
                break;
            default:
                break;
        }
        layer.position = CGPointMake(self.parentView.frame.size.width/2., self.parentView.frame.size.height/2.);
        layer.affineTransform = CGAffineTransformMakeRotation(DEGREES_RADIANS(rotation_angle));
        layer.bounds = bounds;
    }
}

@end
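One detail worth noting: `CGAffineTransformMakeRotation` produces the identical transform for +180° and -180°, so the switch above cannot distinguish the two landscape cases by angle alone. A minimal sketch of an alternative, assuming the stock CvVideoCamera start/stop API and that the orientation change only takes effect after a session restart (both assumptions), registered via `UIDeviceOrientationDidChangeNotification`:

```objc
// Sketch (hypothetical): switch defaultAVCaptureVideoOrientation when the
// device rotates, instead of rotating the preview layer by a fixed angle.
// Register in viewDidLoad:
// [[NSNotificationCenter defaultCenter] addObserver:self
//     selector:@selector(deviceOrientationDidChange)
//     name:UIDeviceOrientationDidChangeNotification object:nil];
- (void)deviceOrientationDidChange
{
    switch ([UIDevice currentDevice].orientation) {
        // Note: UIDeviceOrientation and AVCaptureVideoOrientation are
        // mirrored for the two landscape cases.
        case UIDeviceOrientationLandscapeLeft:
            self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeRight;
            break;
        case UIDeviceOrientationLandscapeRight:
            self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        default:
            return; // ignore portrait / face-up / face-down
    }
    [self.videoCamera stop];
    [self.videoCamera start];
}
```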
↧
OpenCV Camera rotation landscape left and right in iOS Swift
↧
CUDA headers still appear even if -DWITH_CUDA=OFF?
- OpenCV => 3.4.11
- Operating System / Platform => OSX 10.15.4
I am wondering whether CUDA-related functionality is of any use for OpenCV in iOS projects, and I suspect not. When compiling with
```
python opencv/platforms/ios/build_framework.py ios \
--without gpu --without contrib --without dnn --without highgui \
--without legacy --without ml --without nonfree --without objdetect \
--without photo --without stitching --without video --without videoio \
--without videostab --without flann --without dnn --without calib3d \
--without features2d --without gapi --without java_bindings_generator \
--without imgcodecs --iphoneos_archs=arm64 --dynamic \
--without world --disable-bitcode
```
I can still see CUDA-related headers included in the output. They still appear even if I manually add `-DWITH_CUDA=OFF` in `opencv/platforms/ios/build_framework.py`.
↧
Build OpenCV for iOS failed
Hello, I am working on macOS. I followed the instructions to build OpenCV for iOS found here:
https://docs.opencv.org/2.4/doc/tutorials/introduction/ios_install/ios_install.html
The build fails with the following error:
** BUILD FAILED **
The following build commands failed:
Check dependencies
(1 failure)
============================================================
ERROR: Command '['xcodebuild', 'BITCODE_GENERATION_MODE=bitcode', 'IPHONEOS_DEPLOYMENT_TARGET=8.0', 'ARCHS=armv7', '-sdk', 'iphoneos', '-configuration', 'Release', '-parallelizeTargets', '-jobs', '4', '-target', 'ALL_BUILD', 'build']' returned non-zero exit status 65
============================================================
Traceback (most recent call last):
File "opencv/platforms/ios/build_framework.py", line 137, in build
self._build(outdir)
File "opencv/platforms/ios/build_framework.py", line 123, in _build
self.buildOne(t[0], t[1], mainBD, cmake_flags)
File "opencv/platforms/ios/build_framework.py", line 256, in buildOne
execute(buildcmd + ["-target", "ALL_BUILD", "build"], cwd = builddir + "/modules/objc/framework_build")
File "opencv/platforms/ios/build_framework.py", line 40, in execute
retcode = check_call(cmd, cwd = cwd)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 190, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '['xcodebuild', 'BITCODE_GENERATION_MODE=bitcode', 'IPHONEOS_DEPLOYMENT_TARGET=8.0', 'ARCHS=armv7', '-sdk', 'iphoneos', '-configuration', 'Release', '-parallelizeTargets', '-jobs', '4', '-target', 'ALL_BUILD', 'build']' returned non-zero exit status 65
If I run that xcodebuild command manually from the terminal with the arguments above, it complains that the syntax is wrong. I somehow doubt the syntax error is just a bug in the installer, or there would be a lot more complaints, so I assume something is wrong on my side, but I can't think what. Any ideas? Thank you.
↧
App crashing when stitching photos from video capture
I am working on a planar image-stitching feature and am using CvVideoCamera to collect all of the frames as Mats. When frame collection finishes, the photos are stitched together, but I keep getting this error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(3.4.2) /Volumes/build-storage/build/3_4_iOS-mac/opencv/modules/core/src/umatrix.cpp:297: error: (-215:Assertion failed) s >= 0 in function 'setSize'
Is there anything extra I need to do with the Mats before sending them to the stitcher? Has anyone else experienced this issue?
### Update
The app crashes when I use certain properties, namely OrbFeaturesFinder and PlaneWarper; using either causes a crash. Does anyone know a reason specific to iOS?
↧
Converting cv::Mat to a byte array in Objective-C++
Hi, I am Prabh, new to OpenCV and working with tModel in iOS Objective-C++. I have successfully converted the camera frame to a cv::Mat image and want to convert this cv::Mat to a byte array. I have been stuck here for the last two days; can anyone please help? Thanks in advance.
cv::Mat srcMat = cv::Mat(points,false);
cv::Mat destMat = cv::Mat(desPoints);
cv::Mat transferMat = cv::getPerspectiveTransform(srcMat, destMat);
cv::warpPerspective(imageMat,destImageMat,transferMat,destImageMat.size());
cv::cvtColor(destImageMat, destImageMat,cv::COLOR_RGBA2GRAY);
cv::adaptiveThreshold(destImageMat, destImageMat, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 15, 15);
cv::flip(destImageMat,destImageMat, -1);
UIImage *destImage1 = MatToUIImage(destImageMat);
Now the next step is to convert `destImageMat` to a byte array.
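A minimal sketch of one way to do this (assuming the 8-bit `destImageMat` produced above is what needs to become bytes): copy the Mat's pixel buffer into an NSData, cloning first if the Mat is not continuous in memory.

```objc
// Convert a cv::Mat's pixel buffer to NSData (raw bytes).
#import <Foundation/Foundation.h>
#import <opencv2/opencv.hpp>

NSData *MatToBytes(const cv::Mat &mat)
{
    // A Mat produced by ROI/slicing may have row padding; clone() makes the
    // data continuous so it can be copied as a single block.
    cv::Mat continuous = mat.isContinuous() ? mat : mat.clone();
    size_t byteCount = continuous.total() * continuous.elemSize();
    return [NSData dataWithBytes:continuous.data length:byteCount];
}
```

Staying in C++ instead, the same bytes can be placed in a `std::vector<uchar>` built from `continuous.data` and `continuous.data + byteCount`.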
↧
OpenCV iOS - Camera Selection
Hi everyone,
Apologies if this has been answered already, as I did try to search but couldn't find a relevant thread.
We are working on a PPG-based project using OpenCV iOS. It has been fine, but we noticed an issue specific to the iPhone 8S, a problem that might be easily solved if we could select the secondary back camera.
However, we can't find an option in OpenCV to select which camera to use, if one exists.
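CvVideoCamera only exposes defaultAVCaptureDevicePosition (front or back), so OpenCV itself has no per-camera selector. A sketch of how the secondary back camera could be located with AVFoundation (wiring it into a CvVideoCamera subclass would be up to you, since OpenCV does not expose that hook):

```objc
// Sketch: discover a specific back camera with AVFoundation (iOS 10+).
#import <AVFoundation/AVFoundation.h>

AVCaptureDevice *SecondaryBackCamera(void)
{
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession
            discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInTelephotoCamera]
                                  mediaType:AVMediaTypeVideo
                                   position:AVCaptureDevicePositionBack];
    return discovery.devices.firstObject; // nil if the device has no telephoto camera
}
```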
↧
How do I integrate OpenCV into NativeScript?
Can anyone help me integrate OpenCV into my app using NativeScript to produce an iOS or Android app?
Regards
↧
iOS 12 + Mac OS X Mojave EXC_BAD_ACCESS on cvtColor
I've been trying to integrate OpenCV into my iOS project, but have been having trouble with a null-pointer exception. I installed version 3.4.2 using CocoaPods, and the following code snippet works fine:
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <UIKit/UIKit.h>
#import "OpenCVWrapper.h"
@implementation OpenCVWrapper
+ (UIImage *) process:(UIImage *) image {
cv::Mat m;
UIImageToMat(image, m);
// this works fine
return MatToUIImage(m);
}
@end
But when I try to do a simple color conversion (or any other manipulation, for that matter), I get an EXC_BAD_ACCESS and it crashes my app:
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <UIKit/UIKit.h>
#import "OpenCVWrapper.h"
@implementation OpenCVWrapper
+ (UIImage *) process:(UIImage *) image {
cv::Mat m;
UIImageToMat(image, m);
cv::Mat gray;
cv::cvtColor(m, gray, cv::COLOR_BGR2GRAY); // crashes here
return MatToUIImage(gray);
}
@end
I tried version 2 of OpenCV and tried stepping through the assembly, but wasn't able to figure out what was causing the problem.
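As a point of comparison (an assumption about the cause, not a confirmed fix): UIImageToMat produces a 4-channel RGBA Mat, and it yields nothing useful when the UIImage has no backing CGImage, so a guarded version using the RGBA-specific conversion code would look like:

```objc
+ (UIImage *)process:(UIImage *)image {
    cv::Mat m;
    UIImageToMat(image, m);
    if (m.empty()) {
        // Nothing to convert; returning the input avoids touching invalid data.
        return image;
    }
    cv::Mat gray;
    // UIImageToMat produces RGBA data, so COLOR_RGBA2GRAY matches the layout.
    cv::cvtColor(m, gray, cv::COLOR_RGBA2GRAY);
    return MatToUIImage(gray);
}
```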
↧
Augmented reality with Aruco and SceneKit
I'm trying to make an augmented-reality demo project with a simple 3D object in the center of a marker. I need to make it with `OpenCV` and `SceneKit`.
My steps are:
- I obtain the corners of the marker using `Aruco`,
- get the `tvec` and `rvec` with `cv::solvePnP`,
- convert the `tvec` and `rvec` from OpenCV's coordinate system to the SceneKit coordinate system,
- apply the converted rotation and translation to the camera node.
The problem is:
- The object is not centered on the marker. The rotation of the object looks good, but it is not positioned where it should be.
solvePnP code:
cv::Mat intrinMat(3, 3, cv::DataType<double>::type);
// From ARKit (ARFrame camera.intrinsics) - iPhone 6s Plus
intrinMat.at<double>(0,0) = 1662.49;
intrinMat.at<double>(0,1) = 0.0;
intrinMat.at<double>(0,2) = 0.0;
intrinMat.at<double>(1,0) = 0.0;
intrinMat.at<double>(1,1) = 1662.49;
intrinMat.at<double>(1,2) = 0.0;
intrinMat.at<double>(2,0) = 960.0 / 2;
intrinMat.at<double>(2,1) = 540.0 / 2;
intrinMat.at<double>(2,2) = 0.0;
double marker_dim = 3;
cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
CGFloat width = CVPixelBufferGetWidth(pixelBuffer);
CGFloat height = CVPixelBufferGetHeight(pixelBuffer);
cv::Mat mat(height, width, CV_8UC1, baseaddress, 0); //CV_8UC1
cv::rotate(mat, mat, cv::ROTATE_90_CLOCKWISE);
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners;
cv::aruco::detectMarkers(mat,dictionary,corners,ids);
if(ids.size() > 0) {
cv::Mat colorMat;
cv::cvtColor(mat, colorMat, CV_GRAY2RGB);
cv::aruco::drawDetectedMarkers(colorMat, corners, ids, cv::Scalar(0,255,24));
cv::Mat distCoeffs = cv::Mat::zeros(8, 1, cv::DataType<double>::type); //zero out distortion for now
//MARK: solvepnp
std::vector<cv::Point3f> object_points;
object_points = {cv::Point3f(-marker_dim, marker_dim, 0),
                 cv::Point3f(marker_dim, marker_dim, 0),
                 cv::Point3f(marker_dim, -marker_dim, 0),
                 cv::Point3f(-marker_dim, -marker_dim, 0)};
std::vector<cv::Point2f> image_points = std::vector<cv::Point2f>{corners[0][0], corners[0][1], corners[0][2], corners[0][3]};
std::cout << "object points: " << object_points << std::endl;
std::cout << "image points: " << image_points << std::endl;
cv::Mat rvec, tvec;
cv::solvePnP(object_points, image_points, intrinMat, distCoeffs, rvec, tvec);
cv::aruco::drawAxis(colorMat, intrinMat, distCoeffs, rvec, tvec, 3);
cv::Mat rotation, transform_matrix;
cv::Mat RotX(3, 3, cv::DataType<double>::type);
cv::setIdentity(RotX);
RotX.at<double>(4) = -1; //cos(180) = -1
RotX.at<double>(8) = -1;
cv::Mat R;
cv::Rodrigues(rvec, R);
std::cout << "rvecs: " << rvec << std::endl;
std::cout << "cv::Rodrigues(rvecs, R);: " << R << std::endl;
R = R.t(); // rotation of inverse
std::cout << "R = R.t() : " << R << std::endl;
cv::Mat rvecConverted;
Rodrigues(R, rvecConverted); //
std::cout << "rvec in world coords:\n" << rvecConverted << std::endl;
rvecConverted = RotX * rvecConverted;
std::cout << "rvec scenekit :\n" << rvecConverted << std::endl;
Rodrigues(rvecConverted, rotation);
std::cout << "-R: " << -R << std::endl;
std::cout << "tvec: " << tvec << std::endl;
cv::Mat tvecConverted = -R * tvec;
std::cout << "tvec in world coords:\n" << tvecConverted << std::endl;
tvecConverted = RotX * tvecConverted;
std::cout << "tvec scenekit :\n" << tvecConverted << std::endl;
SCNVector4 rotationVector = SCNVector4Make(rvecConverted.at<double>(0), rvecConverted.at<double>(1), rvecConverted.at<double>(2), norm(rvecConverted));
SCNVector3 translationVector = SCNVector3Make(tvecConverted.at<double>(0), tvecConverted.at<double>(1), tvecConverted.at<double>(2));
std::cout << "rotation :\n" << rotation << std::endl;
transform_matrix.create(4, 4, CV_64FC1);
transform_matrix( cv::Range(0,3), cv::Range(0,3) ) = rotation * 1;
transform_matrix.at<double>(0, 3) = tvecConverted.at<double>(0,0);
transform_matrix.at<double>(1, 3) = tvecConverted.at<double>(1,0);
transform_matrix.at<double>(2, 3) = tvecConverted.at<double>(2,0);
transform_matrix.at<double>(3, 3) = 1;
TransformModel *model = [TransformModel new];
model.rotationVector = rotationVector;
model.translationVector = translationVector;
return model;
}
Swift code:
func initSceneKit() {
let scene = SCNScene()
cameraNode = SCNNode()
let camera = SCNCamera()
camera.zFar = 1000
camera.zNear = 0.1
cameraNode.camera = camera
scene.rootNode.addChildNode(cameraNode)
let scnView = sceneView!
scnView.scene = scene
scnView.autoenablesDefaultLighting = true
scnView.backgroundColor = UIColor.clear
let box = SCNBox(width: 10, height: 10 , length: 10, chamferRadius: 0)
boxNode = SCNNode(geometry: box)
boxNode.position = SCNVector3(0,0,0)
scene.rootNode.addChildNode(boxNode)
sceneView.pointOfView = cameraNode
}
func initCamera() {
let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: .video, position: AVCaptureDevice.Position.back)
let deviceInput = try! AVCaptureDeviceInput(device: device!)
self.session = AVCaptureSession()
self.session.sessionPreset = AVCaptureSession.Preset.iFrame960x540
self.session.addInput(deviceInput)
let sessionOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
let outputQueue = DispatchQueue(label: "VideoDataOutputQueue", attributes: [])
sessionOutput.setSampleBufferDelegate(self, queue: outputQueue)
self.session.addOutput(sessionOutput)
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
self.previewLayer.backgroundColor = UIColor.black.cgColor
self.previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
self.previewView.layer.addSublayer(self.previewLayer)
self.session.startRunning()
view.setNeedsLayout()
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
//QR detection
guard let transQR = OpenCVWrapper.arucoTransformMatrix(from: pixelBuffer) else {
return
}
DispatchQueue.main.async(execute: {
self.setCameraMatrix(transQR)
})
}
func setCameraMatrix(_ transformModel: TransformModel) {
cameraNode.rotation = transformModel.rotationVector
cameraNode.position = transformModel.translationVector
// cameraNode.transform = transformModel.transform
}
[image of my result](https://i.stack.imgur.com/Calxx.jpg)
Repo on github with my project: [https://github.com/danilovdorin/ArucoAugmentedReality](https://github.com/danilovdorin/ArucoAugmentedReality)
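For reference, cv::solvePnP expects the 3×3 camera matrix with the focal lengths on the diagonal, the principal point in the last column, and 1 in the bottom-right corner; the intrinMat filled in above instead puts the principal point in the bottom row with 0 in the corner. A construction sketch using the question's values (whether those values are otherwise correct is not assumed):

```objc
// Conventional OpenCV pinhole intrinsics layout expected by cv::solvePnP:
//     [ fx   0  cx ]
// K = [  0  fy  cy ]
//     [  0   0   1 ]
cv::Mat K = (cv::Mat_<double>(3, 3) << 1662.49,     0.0, 960.0 / 2,
                                           0.0, 1662.49, 540.0 / 2,
                                           0.0,     0.0,       1.0);
```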
↧
Reduce the OpenCV framework size by using size optimization
Hi,
I am working on building an opencv.framework with a small size for our iOS application.
According to https://github.com/opencv/opencv/wiki/Compact-build-advice, using size optimization can give a good reduction in framework size.
I also found at https://help.apple.com/xcode/mac/current/#/itcaec37c2a6 that optimization can be controlled with GCC_OPTIMIZATION_LEVEL.
I have tried setting
"set(CMAKE_XCODE_ATTRIBUTE_GCC_OPTIMIZATION_LEVEL "Smallest")"
in my CMakeLists.txt file, but it did not work. Does anyone know the correct way to add this setting to my framework build process?
Thanks
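One thing that may be worth trying (an assumption, untested): "Smallest" is only the Xcode UI label, while the underlying attribute value is `s` (-Os), and CMake's `CMAKE_XCODE_ATTRIBUTE_<attr>` variables can also be passed straight on the command line that invokes the Xcode generator:
```
# Hypothetical invocation: pass the Xcode attribute value ("s" = -Os) directly to CMake.
cmake -GXcode -DCMAKE_XCODE_ATTRIBUTE_GCC_OPTIMIZATION_LEVEL=s <your other flags> <path/to/opencv>
```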
↧
Why is the processImage delegate not getting called automatically?
Hi OpenCV Community,
I hope everyone is well and safe!
In a native iOS application written in Objective-C, I am able to track a red laser and get its coordinates; however, I am having problems dropping this code into a native iOS plugin for Unity, which is a static library.
That said, I am able to successfully set up the CvVideoCamera in the static library and compile the Unity application without any errors, but the processImage delegate is not getting called automatically as it does in the native application.
Below, I have attached and labeled the relevant code for the working native iOS application and for the static library.
[Unity application compiled with static library](/upfiles/16069448401038685.png) - This number should ideally be 8 to confirm that the processImage delegate has been called; however, it remains 7, which only indicates that the camera is working. If I could get this working the same way as in the native iOS application, I could send the coordinates over to Unity.
[Static library - Override.h](/upfiles/1606945296907428.png)
[Static library - Override.mm](/upfiles/1606945342286454.png)
[Static library - Override.mm 2](/upfiles/16069453256764516.png)
[Native iOS application - ViewContoller.h](/upfiles/16069453893565534.png)
[Native iOS application - ViewController.mm](/upfiles/16069454027097983.png)
[Native iOS application - ViewController.mm 2](/upfiles/16069454178343104.png)
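For comparison with the screenshots above, here is a minimal sketch of the wiring CvVideoCamera needs before processImage: fires: the owning class declares CvVideoCameraDelegate conformance, implements processImage:, assigns itself as the delegate, and starts the camera (class and method names are placeholders):

```objc
#import <opencv2/videoio/cap_ios.h>

@interface LaserTracker : NSObject <CvVideoCameraDelegate>
@property (nonatomic, strong) CvVideoCamera *videoCamera;
@end

@implementation LaserTracker

- (void)startWithParentView:(UIView *)view
{
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:view];
    self.videoCamera.delegate = self; // without this, processImage: is never called
    [self.videoCamera start];
}

// Called for every frame once the session is running and the delegate is set.
- (void)processImage:(cv::Mat &)image
{
    // frame processing goes here
}

@end
```

In a Unity plugin there is no view controller keeping the wrapper alive, so it is also worth confirming that the object owning the camera is retained for the lifetime of the session (an assumption about the setup).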
If more information is needed, please let me know!
Best,
Steve
↧