SMODE by D/Labs
Smode is a revolutionary real-time compositing and generative content engine developed by D/Labs. Thanks to real-time rendering, many things can be changed at the crucial, and most costly, time: when the stage, director, actors, lighting, sound, and cameras are all available. This makes Smode a powerful booster of creativity and aesthetics that deeply changes the approach to video content creation and manipulation in the event and entertainment industries.
Traditional Workflow
Most simple graphical operations, such as moving a layer, changing a mask or improving colorimetry, traditionally imply the nightmare of re-rendering all the content the day before the event, an operation that quickly takes hours and generates a great deal of stress.
The process of rendering, encoding and transferring huge video files has become a major bottleneck in the artistic video chain for events.
Smode's Real-time Workflow
With Smode, making most graphical changes is a matter of seconds instead of hours. Thanks to real-time rendering, feedback is immediate, and it becomes possible to explore combinations and adaptations of the content and to validate them directly in the field with the artistic director.
To make this work, Smode relies on the concept of real-time compositions: pieces of content rendered in real-time, created by the content design team and finalized and displayed on-site by the operating team.
- Products -
Smode comes in three forms: the Smode Station media server, the Smode Studio content-creation software and the free Smode Synth software.
- References -
Smode has been under development since 2001 and has been used by D/Labs since 2007 on a wide range of events. The development of Smode has always been heavily influenced by this experience in the field.
Here is a selection of D/Labs projects that have used Smode and impacted its development:
- Features -
Multi Layer 2D/3D Real-time Compositing
Particles and Generative content
Cueing and animation
Video Mapping tools
3D Stage simulation
Integration in complex show setups
Smode is a multi-layer compositing engine working with 2D and 3D layers. The different layers are blended together in real-time with full support of the alpha channel and blending modes. Each layer can be placed, masked and modified with the result of the composition available in real-time at interactive frame rates (30 - 60 fps).
Layers can either be based on image or video files or they can be generated. For large compositions, you can easily group layers in a pre-composition. Compositions can also be referenced to be rendered inside other compositions, all of that in real-time.
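As an illustration of how layers with alpha channels blend back to front, here is a minimal, generic sketch of the Porter-Duff "over" operator on premultiplied-alpha images; this is standard compositing math, not Smode's actual implementation:

```python
import numpy as np

def over(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Composite premultiplied-alpha RGBA layers (values in 0..1).

    Classic Porter-Duff "over": result = top + (1 - top_alpha) * bottom.
    """
    alpha = top[..., 3:4]                # broadcast the top layer's alpha
    return top + (1.0 - alpha) * bottom

# Stack layers back to front, as a compositor does:
h, w = 2, 2
background = np.zeros((h, w, 4)); background[..., :] = [0.0, 0.0, 1.0, 1.0]  # opaque blue
layer      = np.zeros((h, w, 4)); layer[..., :]      = [0.5, 0.0, 0.0, 0.5]  # 50% red, premultiplied
result = over(layer, background)
# each pixel of result is [0.5, 0.0, 0.5, 1.0]: half red over blue
```

A real engine evaluates this per pixel on the GPU and adds the various blending modes (add, multiply, screen, etc.) as variants of the combine step.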
In order to render 3D layers, Smode's 3D engine implements numerous real-time rendering techniques, including advanced materials, shadow mapping, volumetric lighting and 3D deformers. The engine can create primitive geometries from scratch or import models from FBX files.
Smode provides an efficient, fully GPU-based particle simulation engine able to handle millions of particles in real-time. Particles can be used to very quickly create all kinds of content, from fire to water and from stars to snow. Smode supports both particles without physics (aka “points”) and particles with physics, on which fields operate. 3D modifiers can be combined freely to modify all attributes of the particles. The particles can be rendered as sprites, spheres, boxes or custom geometries. It is also possible to produce trails in the form of motion lines or secondary particles.
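Conceptually, a field-driven update of physics particles looks like the following vectorized CPU sketch. The field names and the integrator are illustrative assumptions; the real engine performs this kind of update on the GPU for millions of particles:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                        # a GPU engine scales this to millions

pos = rng.uniform(-1, 1, (N, 3))   # particle positions
vel = np.zeros((N, 3))             # physics particles carry a velocity
dt = 1.0 / 60.0                    # one frame at 60 fps

def gravity_field(p):
    """A constant downward force acting on every particle."""
    return np.tile([0.0, -9.81, 0.0], (len(p), 1))

def attractor_field(p, center=np.zeros(3), strength=2.0):
    """A point attractor: force toward `center`, falling off with distance."""
    d = center - p
    dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-6
    return strength * d / dist**2

def step(pos, vel, fields, dt):
    """One simulation step: accumulate all fields, then integrate
    with semi-implicit Euler."""
    force = sum(f(pos) for f in fields)
    vel = vel + force * dt
    return pos + vel * dt, vel

pos, vel = step(pos, vel, [gravity_field, attractor_field], dt)
```

"Points" without physics would skip the velocity integration entirely and have their attributes driven directly by modifiers.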
Beyond particles, Smode provides a number of techniques that enable creating content very quickly using little or even no data at all, i.e. in a fully generative fashion.
Smode Station provides a 3D stage environment for real-time simulation and previewing of video projection and LED screens. The stage lets you define your video projectors, projection surfaces and LED screens in a 3D environment and provides a real-time simulation of the expected result. Once the stage is defined, you can push any video content through it, be it a still image, a video or generative content, and obtain a preview of the result in real-time.
Video projector simulation
Smode Station has an advanced video-projector simulation mode that takes luminosity, occlusions and resolution into account. Thanks to this simulation, it is possible to make informed decisions in the early steps of a project, notably by identifying potential problems and comparing alternatives: how many projectors to use, which lenses to use, how to position them, etc.
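As a rough illustration of why luminosity matters in such a simulation, a first-order estimate spreads the projector's lumen output over the projected image area. This hypothetical helper ignores occlusions, surface angle and ambient light, which a full simulation accounts for:

```python
def illuminance(lumens: float, throw_ratio: float, distance_m: float,
                aspect: float = 16 / 9) -> float:
    """Rough on-surface illuminance (lux) of a projector hitting a flat,
    perpendicular surface: lumens divided by the projected image area.

    throw_ratio = distance / image width (a property of the lens).
    """
    width = distance_m / throw_ratio
    height = width / aspect
    return lumens / (width * height)

# A hypothetical 10 000 lm projector with a 1.5:1 lens, 6 m from the surface,
# lights a 4 m x 2.25 m image at roughly 1100 lux:
lux = illuminance(10_000, 1.5, 6.0)
```

Doubling the distance quarters the illuminance, which is exactly the kind of trade-off (fewer bright projectors vs. more dim ones) the simulation lets you compare before renting hardware.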
LED Screen simulation
Smode Station enables modeling LED screens and LED strips directly in the 3D stage space. Content can then be sent either directly onto the screens or into virtual projectors that affect individual LED pixels.
Stage-based previsualization
At any step of a project, from the call for tender until the last day, the stage simulator makes it possible to produce a fast real-time previsualization of video content. Thanks to this previsualization, artistic directors can make better decisions and graphic designers get immediate feedback on how their content will look once on stage. Stage-based previsualizations can also be exported as videos for client demonstrations.
Projector illumination preview on two video-mapped spheres
Image warping and Soft-edges
Smode Station provides a wide toolbox for multi-projector video mapping: warp grids and bezier grids with any number of points, soft-edge editing, polygonal masks, etc. In addition, all the colorimetry and distortion modifiers applicable to content are also applicable to the video-projector images. A huge palette of effects can thus be applied as per-display post-effects.
UV-based 3D Video Mapping
If a 3D model of the projection surfaces is available, it can be imported into Smode through the FBX format. Content can then be defined in the UV map of the model, and Smode takes care of computing each video-projector view in real-time.
Instead of forcing one particular video processing pipeline, Smode Station provides a programmable pipeline that can cover the whole range of scenarios, from the simplest 2D transformation to the most advanced video mapping setups with moving objects. Nodes in the pipeline correspond to content maps, video projectors, LED screens, intermediate mappers and the final mapper. These nodes can be connected in all possible ways, and 2D modifiers can be plugged anywhere in the processing chain.
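A programmable pipeline of this kind can be sketched as a small node graph where each node pulls frames from its inputs, applies its own processing, and passes the result downstream. The node names and `process` callbacks here are hypothetical illustrations, not Smode Station's API:

```python
class Node:
    """A pipeline node: pulls frames from its inputs, applies its own
    processing, and hands the result downstream."""
    def __init__(self, name, process=lambda frame: frame):
        self.name = name
        self.process = process
        self.inputs = []

    def connect(self, upstream):
        """Wire an upstream node into this one; returns self for chaining."""
        self.inputs.append(upstream)
        return self

    def render(self, frame):
        for up in self.inputs:
            frame = up.render(frame)
        return self.process(frame)

# Hypothetical chain: content map -> 2D blur modifier -> warp mapper -> projector.
# Frames are modeled as lists of applied steps, just to trace the flow.
content   = Node("content_map")
blur      = Node("blur_2d",  process=lambda f: f + ["blur"]).connect(content)
mapper    = Node("warp_map", process=lambda f: f + ["warp"]).connect(blur)
projector = Node("projector_1").connect(mapper)

print(projector.render([]))   # -> ['blur', 'warp']
```

Because any node can feed any other, the same graph covers a trivial 2D pass-through and a multi-mapper setup with per-display modifiers.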
Multi-channel content
On complex setups mixing different projection surfaces and LED surfaces, it is often overly constraining to force all the content to be defined inside a single rectangle of pixels. Similarly to, e.g., Dolby 5.1 for audio, Smode Station relies on the concept of multi-channel content. On one side of the pipeline, you define channels corresponding, for example, to a background LED screen, a video-mapped surface and auxiliary LED screens. Once these channels are defined, content units called scenes define the video content for every possible channel. This multi-channel concept enables a clean interface between content designers and setup designers, making the boundaries of responsibility clear.
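The channel/scene separation can be sketched as follows; the channel names and the `Scene` class are hypothetical, chosen only to illustrate the clean boundary between setup design (declaring channels) and content design (filling scenes):

```python
# Setup designers declare the channels once for the whole show:
CHANNELS = ["background_led", "mapped_surface", "aux_led_left", "aux_led_right"]

class Scene:
    """A content unit providing (possibly partial) content per channel."""
    def __init__(self, name, content_by_channel):
        unknown = set(content_by_channel) - set(CHANNELS)
        if unknown:
            raise ValueError(f"unknown channels: {unknown}")
        self.name = name
        self.content = content_by_channel

    def content_for(self, channel, fallback="black"):
        """Channels a scene leaves empty fall back to a neutral output."""
        return self.content.get(channel, fallback)

# Content designers then fill scenes channel by channel:
opening = Scene("opening", {
    "background_led": "ocean_loop.mov",
    "mapped_surface": "generative_particles",
})
opening.content_for("aux_led_left")   # -> 'black' (channel left empty)
```

The validation in the constructor is where the clean interface pays off: a scene referencing a channel the setup never declared fails immediately, not during the show.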
Thanks to advanced cueing and timeline concepts, it is easy to animate any of the parameters of your compositions.
Animate anything!
Since compositing occurs in real-time, nearly every parameter of your compositions can be animated: from simple opacity parameters to advanced placement and deformation parameters in 2D and 3D layers. It is even possible to make one animation act on another one. This makes it possible to implement non-trivial logic, such as making a loop animation and then making other animations that can start or stop that loop.
Linear and non-linear animation
Smode supports both linear and non-linear animation. Linear animation is defined through timelines and corresponds to entirely pre-programmed blocks of animation. Non-linear animation is defined through cues, similarly to what is done in lighting encoding. Cues correspond to pieces of animation, and they can be triggered either manually by the operator or through a mapping with some external device, such as a DMX console or an incoming time code.
From fully live to fully pre-programmed
Each piece of content can provide its own timelines and cues. Smode then supports different show scenarios:
- Time-coded scenario: if both content order and timing are fully predefined, it is possible to create one huge timeline bringing content-specific animations back into a single time reference, which can be mapped to an input time code.
- Performing-arts scenario: if the order of the actions is predefined but the precise timing is not, it is possible to create a sequence of cues that defines an order among content loading/unloading and content-specific animations. During the show, the operator fires GO on the cues according to his or her landmarks.
- Live scenario: if neither content order nor timing is predefined, it is possible to directly trigger content-specific animations in any order.
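The performing-arts scenario above can be sketched as a minimal cue list where order is fixed but timing is left to the operator pressing GO; the classes are hypothetical illustrations, not Smode's cue model:

```python
class Cue:
    """A piece of animation triggered manually (GO), by an external
    device such as a DMX console, or by an incoming time code."""
    def __init__(self, name, action):
        self.name = name
        self.action = action

class CueList:
    """Cues are ordered; when each one fires is up to the operator."""
    def __init__(self, cues):
        self.cues = cues
        self.index = 0

    def go(self):
        """Fire the next cue in order; returns its name, or None at the end."""
        if self.index >= len(self.cues):
            return None
        cue = self.cues[self.index]
        self.index += 1
        cue.action()
        return cue.name

log = []
show = CueList([
    Cue("load_act1",  lambda: log.append("act1 loaded")),
    Cue("start_loop", lambda: log.append("loop running")),
    Cue("blackout",   lambda: log.append("blackout")),
])
show.go(); show.go()   # the operator fires the first two cues
# log is now ['act1 loaded', 'loop running']
```

The time-coded scenario replaces the operator's GO with a clock driving one master timeline, and the live scenario drops the ordering constraint, letting any cue fire at any moment.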
Smode Station supports various inputs and outputs in order to integrate smoothly into complex show setups.