How WebAR experiences are developed

Part 2 of our series looks at the various possibilities offered by AR technology and answers questions about how Web AR works.

Web AR experiences are currently more popular than ever, on the customer side as well as on the company side. Because they involve consumers interactively, AR experiences enable extraordinary customer journeys. In our series "Behind the Tech", we therefore take a look at the topic of Web AR and provide insights into the development of these experiences. The first part deals with the different development phases of Web AR. Click here for the article.

An agency will usually take care of most of these considerations for you and advise you on the best way to proceed. However, you should be aware that a customized Web AR experience for a brand or company can only very rarely be "bought" ready-made, assembled from generic, unprocessed objects from a 3D web portal.

Therefore, especially if you have very clear ideas of your own, collect input before the storyboard phase that you can hand to the developers for the design of the experience or the objects in it. This could be reference images from best-practice AR experiences, suggestions for surface textures, or blueprints of 3D objects.

On the one hand, the agency will research suitable objects on 3D web portals, acquire them for your experience and, if necessary, edit colors, textures or even the geometry of the objects until they meet the quality requirements for appearance and performance. On the other hand, the developers will have to model special objects from scratch. The biggest time consumer, especially in realistic, high-polygon experiences, is rendering: the more polygons an object has, the more processing power is needed to render it.

More details require more time

The opposite, i.e. the subsequent reduction of polygons, can also take up a lot of time in a project. This occurs, for example, when extremely detailed CAD data of a technical product is to form the basis for an animation. Such data often contains a large amount of detail that is not necessary for the actual experience, e.g. small screws and parts in an engine model. The developer then has to remove or simplify these afterwards.
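Polygon reduction is usually carried out in 3D software such as Blender, but to give a rough idea of what it involves, here is a minimal sketch in TypeScript, assuming a three.js-based pipeline and using the SimplifyModifier that ships with the three.js examples; the 50% reduction target is an arbitrary assumption.

```typescript
// Sketch: reducing the polygon count of an over-detailed mesh in code,
// using the SimplifyModifier from the three.js examples. In practice this
// step is usually done in 3D software; the 50% target here is arbitrary.
import * as THREE from 'three';
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

function simplifyMesh(mesh: THREE.Mesh): THREE.Mesh {
  const modifier = new SimplifyModifier();
  // The second argument is the number of vertices to remove.
  const verticesToRemove = Math.floor(mesh.geometry.attributes.position.count * 0.5);
  const simplifiedGeometry = modifier.modify(mesh.geometry, verticesToRemove);
  return new THREE.Mesh(simplifiedGeometry, mesh.material);
}
```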

If you have not opted for a low-poly look for your world, you should also expect a higher expenditure of time and money for modeling and animating 3D vegetation and other "accessories" such as clouds or liquids. Even if such assets can be purchased, they are often expensive and high-polygon. The latter should never come at the expense of the actual focus objects, for example because it reduces the frame rate of the experience. This is usually noticeable as jerky animations or a delay when moving the smartphone around an object. Therefore, a polygon reduction is usually sensible here as well.

Understanding the work processes behind 3D animations

Speaking of animations: it is very likely that your experience will contain animated objects. Here, too, it is important to consider which animations should be exported as a preset directly from 3D software and which the development team can produce directly in the code. Animations that are relatively easy to create in code are, for example, a constant rotation of an object or a movement from point A to point B. They are also very flexible afterwards: if, for example, the rotation radius is too large or too small, or an object floats in from too great a height, the developer can change this very easily.
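To illustrate how simple such code-driven animations can be, here is a minimal sketch, assuming a three.js-based rendering layer (which many Web AR frameworks build on); the object, coordinates and duration are placeholder assumptions.

```typescript
// Minimal sketch: code-driven animations in a three.js-based Web AR scene.
// Assumes `object` is already added to the scene and that a render loop
// calls animate() every frame with the elapsed time in seconds.
import * as THREE from 'three';

const pointA = new THREE.Vector3(0, 2, -1); // start position (assumed)
const pointB = new THREE.Vector3(0, 0, -1); // end position (assumed)
const duration = 2;                         // seconds for the A-to-B movement

function animate(object: THREE.Object3D, elapsed: number): void {
  // Constant rotation: one number controls the speed.
  object.rotation.y = elapsed * Math.PI * 0.5; // quarter turn per second

  // Movement from point A to point B: a single interpolation.
  const t = Math.min(elapsed / duration, 1);
  object.position.lerpVectors(pointA, pointB, t);
}
```

If the rotation speed or the travel path turns out to be wrong, only these few numbers need to change, which is exactly why such animations are cheap to adjust afterwards.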

In the case of more complex animations, such as a skeleton animation, a subsequent adjustment in the code of the Web AR experience is much more complicated. Why? 3D animations, much like animated films, are still created with so-called keyframes: reference frames that specify, for example, the start and end points of a movement. In animated films, the keyframer draws these key poses and the inbetweener draws all the images in between (the in-between frames), which finally turn the whole thing into a fluid movement.

3D animations also consist of keyframes and in-between frames. What does this mean for our example of a skeleton animation? A typical 3D character skeleton consists of around 65 bones that are highly interdependent when moving. If the developer changes keyframe A of a movement, e.g. the position of the right leg, not only the parameters of the individual bones within the leg have to be changed in the code for keyframe A, but also those of all following in-between frames. Otherwise the movement would look unnatural.
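To give an impression of what keyframe data looks like in code, here is a small sketch in three.js notation; the bone name, timings and rotation values are purely hypothetical. Multiply this by sixty-odd bones and every in-between frame, and it becomes clear why nobody wants to edit such animations by hand.

```typescript
// Sketch of keyframe data in code, assuming a three.js-based scene.
// Quaternion keyframes for a single (hypothetical) bone named "RightUpLeg":
// times in seconds, values as flat [x, y, z, w] quaternions per keyframe.
import * as THREE from 'three';

const rightLegTrack = new THREE.QuaternionKeyframeTrack(
  'RightUpLeg.quaternion',
  [0, 0.5, 1.0],
  [
    0, 0, 0, 1,          // keyframe A: neutral pose
    0.259, 0, 0, 0.966,  // roughly a 30-degree forward swing
    0, 0, 0, 1,          // back to neutral
  ]
);

// A clip bundles many such tracks (one or more per bone) into one animation.
const clip = new THREE.AnimationClip('step', 1.0, [rightLegTrack]);
```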

How a developer works

In practice, this means that the keyframes of the animation to be changed have to be redefined from scratch by the developer in 3D animation software. The software takes over the calculation of all in-between frames, and the developer can check much faster whether the motion sequence is coherent and looks natural. Even if the effort remains considerable despite the software, it is much lower than writing the animation code manually.

Finally, the developer can export the finished animation from the 3D software and embed it in the code of the Web AR environment. Certain data must be stored directly in the object beforehand; this process is called "baking" in technical jargon. On the one hand, "baking" ensures that light reflections, shadows and textures of the object are preserved during export.

On the other hand, animation data can also be saved in the geometry of the object by "baking", so that it is not lost during export. Depending on the complexity of the animation, this too means a greater time effort. Common web-friendly 3D formats for export are glTF and its binary variant GLB.
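For illustration, here is a minimal sketch of how a baked GLB file and its embedded animation might be loaded in a three.js-based Web AR scene; the file name "product.glb" is a placeholder.

```typescript
// Minimal sketch: loading a baked GLB and playing its embedded animation,
// assuming a three.js scene. The file name is hypothetical.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const clock = new THREE.Clock();
let mixer: THREE.AnimationMixer | undefined;

new GLTFLoader().load('product.glb', (gltf) => {
  scene.add(gltf.scene);

  // Baked animation clips travel inside the file and can be played directly.
  mixer = new THREE.AnimationMixer(gltf.scene);
  gltf.animations.forEach((clip) => mixer!.clipAction(clip).play());
});

// Call once per frame from the render loop to advance the animation.
function update(): void {
  mixer?.update(clock.getDelta());
}
```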

Development and UX Design

Since we are creating an interactive, virtual 3D world, the work steps in developing the experience are similar to those in classic game development. First, all exported 3D assets and all other required assets, such as typographic elements or sound files, are imported into the development environment. As a rule, the look can also be optimized in this phase, and further visual effects such as bloom, glare or particle effects can be added, for example to present products in a more attractive or emotional way.
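As a rough illustration, here is how a bloom effect could be layered onto the rendered scene in a three.js-based setup; the strength, radius and threshold values are arbitrary assumptions that a developer would tune to the look of the experience.

```typescript
// Sketch: adding a bloom effect on top of the rendered scene, assuming a
// three.js renderer; scene, camera and renderer are created elsewhere.
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

function setupBloom(
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.Camera
): EffectComposer {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera)); // render the scene first
  composer.addPass(
    new UnrealBloomPass(
      new THREE.Vector2(window.innerWidth, window.innerHeight),
      0.8,  // strength: how bright the glow is (assumed value)
      0.4,  // radius: how far it spreads (assumed value)
      0.85  // threshold: only bright areas bloom (assumed value)
    )
  );
  return composer; // call composer.render() instead of renderer.render()
}
```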

The actual logic of the experience is now programmed on the basis of the storyboard and all assets. A well thought-out UX design is crucial here, one that encourages the user to playfully discover the experience by interacting with the virtual world. Taps and GUIs (graphical user interfaces) can now be used to link interactions with the assets. For the skeleton animation described above, this could mean lifting the leg with one tap on the screen and lowering it with two taps, or something similar. It all depends on the previously defined concept, the storyline and the UX design of the experience.
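As a sketch of how such a tap interaction could be wired up in a three.js-based scene: a ray is cast from the camera through the tap position, and if it hits the object, the corresponding animation is triggered. The names used here are hypothetical placeholders.

```typescript
// Sketch: linking a screen tap to an object interaction via raycasting,
// assuming a three.js scene. `camera`, `skeletonObject` and
// `playLegAnimation` are hypothetical names for the real assets and handlers.
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const tapPosition = new THREE.Vector2();

function handleTap(
  event: PointerEvent,
  camera: THREE.Camera,
  skeletonObject: THREE.Object3D,
  playLegAnimation: () => void
): void {
  // Convert the tap position to normalized device coordinates (-1 to +1).
  tapPosition.x = (event.clientX / window.innerWidth) * 2 - 1;
  tapPosition.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // Cast a ray from the camera through the tap and check what it hits.
  raycaster.setFromCamera(tapPosition, camera);
  const hits = raycaster.intersectObject(skeletonObject, true);
  if (hits.length > 0) {
    playLegAnimation(); // trigger the interaction defined by the UX concept
  }
}
```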

Manage content yourself

Finally, it should be mentioned that it is generally an advantage for clients to be able to manage and edit the web experience themselves. For this purpose, the agency should set up access to a content management system (CMS), just as for a website. This can save time and money, especially when information changes frequently.

There are certainly many more aspects that contribute to the success of a Web AR project. However, with the background knowledge presented here, you are well equipped to successfully start your own Web AR project with the service provider of your choice.
