
Which method should be used to achieve the most accurate world tracking experience in ARKit?


My company is developing an Augmented Reality app for a client using ARKit, and they want the best possible world tracking experience on iOS. We have told them that this is not an exact science and that small variations are perfectly normal, but they want us to do everything possible to minimize errors, drift, and movement of holograms.

We have tested some ARHitTestResult examples in Unity and some others that use ARImageAnchor, but we can't decide which is best. It seems that using an ARImageAnchor improves tracking, but I am not sure if that is really the case or if it is just an optical illusion.

Which is better: ARImageAnchor or a plain ARAnchor?

Please advise or share any Unity/Apple documentation on this matter.

Thanks.


Solution

  • Recommendations for accurate tracking

    Any anchor in ARKit (ARFaceAnchor, ARGeoAnchor, ARImageAnchor, ARPlaneAnchor, etc.) inherits from the ARAnchor parent class and, in some cases, conforms to the ARTrackable protocol.

    Every anchor in ARKit has its own purpose (for instance, ARPlaneAnchor is a specialized version of ARAnchor designed for the plane detection process). I don't think one anchor type is more precise than another. All you need for robust tracking results are good lighting conditions, distinguishable 3D surfaces, and high-contrast textures on real-world objects. A pre-saved ARWorldMap is also an excellent starting point for a persistent AR experience.
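
    For example, here is a minimal sketch of saving a world map and relocalizing against it later (the file URL and helper names are illustrative assumptions, not part of the original answer):

    import ARKit
    
    // Hypothetical location for the archived map; adjust to your storage needs.
    let mapURL = FileManager.default.temporaryDirectory.appendingPathComponent("worldMap")
    
    // Save the current world map once tracking has had time to stabilize.
    func saveWorldMap(from session: ARSession) {
        session.getCurrentWorldMap { worldMap, error in
            guard let map = worldMap else { return }
            if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                            requiringSecureCoding: true) {
                try? data.write(to: mapURL)
            }
        }
    }
    
    // Relocalize against the saved map when starting a new session.
    func restoreWorldMap(into session: ARSession) {
        guard let data = try? Data(contentsOf: mapURL),
              let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: data)
        else { return }
        let config = ARWorldTrackingConfiguration()
        config.initialWorldMap = map
        session.run(config, options: [.resetTracking, .removeExistingAnchors])
    }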

    Don't use repetitive texture/object patterns or surfaces with solid colors in the scene you're tracking. For best tracking results, don't track moving objects; you need a static environment. And I should say that all Apple devices are well-calibrated for a stable and precise AR experience.
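
    To see when those conditions degrade, you can observe the session's tracking state; a minimal sketch using the standard ARSessionObserver callback (the class name is just an illustration):

    import ARKit
    
    final class TrackingQualityObserver: NSObject, ARSessionDelegate {
        // Called whenever ARKit's confidence in world tracking changes.
        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            switch camera.trackingState {
            case .limited(.insufficientFeatures):
                print("Scene needs more texture or better lighting")
            case .limited(.excessiveMotion):
                print("Device is moving too fast")
            case .notAvailable:
                print("Tracking is not available")
            default:
                break // .normal and other limited reasons
            }
        }
    }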

    Additional tips for a robust AR experience

    If you turn on the image detection option in the 6-DoF ARWorldTrackingConfiguration, you'll get an ARImageAnchor object for each detected image, containing data on the position and orientation of that image in the world-tracking AR session. This doesn't improve the precision of world tracking, but it does slow down processing.

    // Load the reference images from the "ARGroup" asset catalog group.
    guard let refImgs = ARReferenceImage.referenceImages(inGroupNamed: "ARGroup",
                                                         bundle: nil)
    else { fatalError("Missing expected resources.") }
    
    // Enable image detection inside a world-tracking configuration.
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImgs
    config.maximumNumberOfTrackedImages = 3
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
    

    A world-tracking session with image tracking enabled can simultaneously track only a small number of images. You can track more images with ARImageTrackingConfiguration. But image detection accuracy and performance are considerably reduced with larger numbers of detection images.
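
    For comparison, here is a minimal sketch of an image-only configuration, reusing the same hypothetical "ARGroup" resource group as above:

    // Image-only tracking: no world tracking, just the detected images.
    guard let refImgs = ARReferenceImage.referenceImages(inGroupNamed: "ARGroup",
                                                         bundle: nil)
    else { fatalError("Missing expected resources.") }
    
    let imageConfig = ARImageTrackingConfiguration()
    imageConfig.trackingImages = refImgs
    imageConfig.maximumNumberOfTrackedImages = 4
    session.run(imageConfig, options: [.resetTracking, .removeExistingAnchors])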

    With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view. World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations — for example, an advertisement inside a moving subway car.
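
    In either configuration, each recognized image surfaces as an ARImageAnchor that you can react to from a session (or renderer) delegate; a rough sketch, with placeholder handling logic:

    // Inside your ARSessionDelegate:
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            // The anchor's transform gives the detected image's position and
            // orientation; referenceImage tells you which image was found.
            let name = imageAnchor.referenceImage.name ?? "unnamed"
            let size = imageAnchor.referenceImage.physicalSize
            print("Detected image '\(name)' with physical size \(size)")
        }
    }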

    So, using ARImageAnchors in your scene doesn't add an extra layer of quality to world-tracking results. Check Apple's Recognizing Images in an AR Experience article for detailed info.