- Detailed Features -

Multi Layer 2D/3D Real-time Compositing

Particles and Generative content

Cueing and animation

Video Mapping tools

3D Stage simulation

Integration in complex show setups

Smode is a multi-layer compositing engine working with 2D and 3D layers. The different layers are blended together in real-time, with full support for alpha channels and blending modes. Each layer can be placed, masked and modified, with the result of the composition available in real-time at interactive frame rates (30-60 fps).
Layers can either be based on image or video files, or they can be generated. For large compositions, you can easily group layers into a pre-composition. Compositions can also be referenced and rendered inside other compositions, all of this in real-time.
In order to render 3D layers, Smode’s 3D engine implements numerous real-time rendering techniques, including support for advanced materials, shadow mapping, volumetric lighting and 3D deformers. The 3D engine can create primitive geometries from scratch or import models from FBX files.
Smode provides an efficient, fully GPU-based particle simulation engine able to handle up to millions of particles in real-time. Particles can be used to create all kinds of content very quickly, from fire to water and from stars to snow. Smode supports both particles without physics (aka “points”) and particles with physics, on which fields operate. 3D modifiers can be combined freely to modify all attributes of the particles. The particles can be rendered with sprites, spheres, boxes or custom geometries. It is also possible to produce trails in the form of motion lines or of secondary particles.
Tornado in Smode
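The core idea of particles with physics driven by fields can be sketched as below. This is a purely illustrative CPU version under assumed field names and constants (gravity plus a point attractor); Smode runs this kind of per-particle update on the GPU.

```python
import random

# Minimal sketch of field-driven particle simulation (illustrative only;
# a real engine runs this per-particle on the GPU, not in plain Python).
random.seed(0)

def gravity_field(pos):
    # constant downward force
    return (0.0, -9.81, 0.0)

def attractor_field(pos, center=(0.0, 5.0, 0.0), strength=4.0):
    # pull each particle toward a point, weakening with squared distance
    d = [c - p for p, c in zip(pos, center)]
    dist2 = sum(x * x for x in d) + 1e-6
    return tuple(strength * x / dist2 for x in d)

def step(positions, velocities, fields, dt=1.0 / 60.0):
    # Euler integration: sum field forces, update velocity, then position
    for i, (p, v) in enumerate(zip(positions, velocities)):
        force = (0.0, 0.0, 0.0)
        for field in fields:
            force = tuple(a + b for a, b in zip(force, field(p)))
        v = tuple(vi + fi * dt for vi, fi in zip(v, force))
        positions[i] = tuple(pi + vi * dt for pi, vi in zip(p, v))
        velocities[i] = v

particles = [(random.uniform(-1, 1), 0.0, random.uniform(-1, 1)) for _ in range(1000)]
speeds = [(0.0, 0.0, 0.0)] * 1000
step(particles, speeds, [gravity_field, attractor_field])
```

Combining several such fields on the same particle population is what makes fire, snow or water-like behaviors quick to build.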

Beyond particles, Smode provides a number of techniques that enable creating content very quickly using little or even no data at all, i.e. in a fully generative fashion.
Smode Station provides a 3D stage environment for real-time simulation and previewing of video projection and LED screens. The stage lets you define your video projectors, projection surfaces and LED screens in a 3D environment and provides a real-time simulation of the expected result. Once the stage is defined, you can push any video content through it (a still image, a video or generative content) and obtain a preview of the result in real-time.

Video projector simulation

Smode Station has an advanced video projector simulation mode that takes luminosity, occlusions and resolution into account. Thanks to this simulation, it is possible to make informed decisions in the early steps of a project, notably by identifying potential problems and comparing alternatives: how many projectors to use, which lenses to choose, how to position them, etc.
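The kind of arithmetic such a simulation automates can be sketched with a back-of-the-envelope comparison. The figures below (throw distance, lens ratio, lumen count) are illustrative assumptions, not values taken from Smode.

```python
# Rough projector-choice arithmetic: image size from throw ratio,
# illuminance from lumens over surface, pixel density from resolution.

def projection_width(throw_distance_m, throw_ratio):
    # throw ratio = throw distance / image width
    return throw_distance_m / throw_ratio

def illuminance_lux(lumens, width_m, aspect=16 / 9):
    # lux = lumens spread over the projected surface
    height = width_m / aspect
    return lumens / (width_m * height)

def pixel_density(h_resolution, width_m):
    # pixels per metre on the surface
    return h_resolution / width_m

w = projection_width(10.0, 1.5)             # 10 m throw, 1.5:1 lens
print(round(illuminance_lux(12000, w), 1))  # 12k-lumen projector -> 480.0 lux
print(round(pixel_density(1920, w), 1))     # Full HD -> 288.0 px/m
```

Comparing these numbers across candidate lenses and positions is exactly the decision the simulation mode supports visually.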

LED Screen simulation

Smode Station lets you model LED screens and LED strips directly in the 3D stage space. Content can then be sent either directly onto the screens or through virtual projectors that affect individual LED pixels.

Stage based previsualization

At any step of a project, from the call for tenders to the last day, the stage simulator produces a fast real-time previsualization of video content. Thanks to this previsualization, artistic directors can make better decisions and graphic designers get immediate feedback on how their content will look once on stage. Stage-based previsualizations can also be exported as videos for client demonstrations.

Projector illumination preview on two video-mapped spheres

Image warping and Soft-edges

Smode Station provides a wide toolbox for multi-projector video mapping: warp grids and Bézier grids with any number of points, soft-edge editing, polygonal masks, etc. In addition, all the colorimetry and distortion modifiers applicable to content are also applicable to the video projector images. A huge palette of effects can thus be applied as per-display post-effects.

UV-based 3D Video Mapping

If a 3D model of the projection surfaces is available, it can be imported into Smode through the FBX format. Content can then be defined in the UV map of the model, and Smode takes care of computing each video projector’s view in real-time.
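The lookup at the heart of UV-based mapping can be sketched as follows: content lives in the model's UV space, and each rendered fragment of a projector view fetches its color at the surface point's (u, v) coordinate. This is a toy nearest-neighbor version, not Smode code.

```python
# Sketch of a UV texture lookup: content defined in UV space,
# sampled per fragment of the projector view.

def sample_nearest(texture, u, v):
    # texture: list of rows of pixels; u, v in [0, 1)
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# tiny 2x2 content map, purely for illustration
content = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 255)]]

# a fragment whose surface point has UV (0.8, 0.1) reads the second texel
print(sample_nearest(content, 0.8, 0.1))   # (0, 255, 0)
```

In practice this sampling runs on the GPU with filtering, once per projector, which is why each projector's view stays consistent with the others automatically.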

Programmable Pipeline

Instead of forcing one particular video processing pipeline, Smode Station provides a programmable pipeline that covers the whole range of scenarios, from the simplest 2D transformation to the most advanced video mapping setups with moving objects. Nodes in the pipeline correspond to content maps, video projectors, LED screens, intermediate mappers and the final mapper. These nodes can be connected in all possible ways, and 2D modifiers can be plugged in anywhere along the processing chain.
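A pipeline of freely connectable nodes can be sketched as a small graph. The node names and structure below are illustrative, not Smode's actual object model; the point is that any node can feed any other and a modifier can sit anywhere in the chain.

```python
# Toy sketch of a programmable pipeline as a graph of processing nodes.

class Node:
    def __init__(self, name, transform=None):
        self.name = name
        self.transform = transform or (lambda frame: frame)  # identity by default
        self.inputs = []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self

    def render(self, frame):
        # pull from upstream nodes, then apply this node's own transform
        for up in self.inputs:
            frame = up.render(frame)
        return self.transform(frame)

# content map -> 2D modifier -> intermediate mapper -> projector output
content = Node("content")
invert = Node("invert", lambda f: [255 - p for p in f])
mapper = Node("mapper")
projector = Node("projector").connect(mapper.connect(invert.connect(content)))

print(projector.render([0, 128, 255]))   # [255, 127, 0]
```

Swapping the invert modifier to another position, or fanning one content node into several projectors, is just a matter of rewiring the graph.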

Multi-channel content

On complex setups mixing different projection surfaces and LED surfaces, it is often very constraining to force all the content to be defined inside a single rectangle of pixels. Similarly to, e.g., Dolby 5.1 for audio, Smode Station relies on the concept of multi-channel content. On the setup side of the pipeline, you define channels corresponding, for example, to a background LED screen, a video-mapped surface and auxiliary LED screens. Once these channels are defined, content units called scenes define the video content for every possible channel. This multi-channel concept enables a clean interface between content designers and setup designers, making the boundaries of responsibility clear.
Thanks to advanced cueing and timeline concepts, it is easy to animate any of the parameters of your compositions.

Animate anything!

Since compositing occurs in real-time, nearly every parameter of your compositions can be animated: from simple opacity parameters to advanced placement and deformation parameters in 2D and 3D layers. It is even possible to make one animation act on another. This makes it possible to implement non-trivial logic, such as creating a looping animation and then making other animations that start or stop that loop.

Linear and non-linear animation

Smode supports both linear and non-linear animation. Linear animation is defined through timelines and corresponds to entirely pre-programmed blocks of animation. Non-linear animation is defined through cues, similarly to what is done on lighting consoles. Cues correspond to pieces of animation and can be triggered either manually by the operator, or through a mapping with an external device such as a DMX console or an incoming time code.

From full live to fully pre-programmed

Each piece of content can provide its own timelines and cues. Smode then supports different show scenarios:
  • Time-coded scenario: if both content order and timing are fully predefined, it is possible to create one large timeline that brings the content-specific animations back into a single time reference, which can be mapped to an incoming time code.
  • Performing arts scenario: if the order of the actions is predefined but the precise timing is not, it is possible to create a sequence of cues that defines an order among content loading/unloading and content-specific animations. During the show, the operator fires GO on each cue according to their landmarks.
  • Live scenario: if neither content order nor timing is predefined, it is possible to directly trigger content-specific animations in any order.
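The difference between the cue-driven and the live scenarios above can be sketched with a minimal cue stack. This is illustrative only; Smode's real cueing model is far richer.

```python
# Minimal cue-stack sketch: sequential GO for the performing-arts scenario,
# fire-by-name for the live scenario.

class CueStack:
    def __init__(self, cues):
        self.cues = cues          # ordered list of (label, action) pairs
        self.index = 0

    def go(self):
        # performing-arts scenario: the operator fires the next cue in order
        if self.index < len(self.cues):
            label, action = self.cues[self.index]
            self.index += 1
            action()
            return label

    def fire(self, label):
        # live scenario: trigger any cue directly, in any order
        for name, action in self.cues:
            if name == label:
                action()
                return name

log = []
stack = CueStack([("load_intro", lambda: log.append("intro")),
                  ("start_loop", lambda: log.append("loop"))])
stack.go()
stack.go()
print(log)   # ['intro', 'loop']
```

The time-coded scenario corresponds to replacing the manual `go()` calls with triggers driven by an incoming time code.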
Smode Station supports various inputs and outputs in order to integrate smoothly into complex show setups.

Control Input

Smode Station can be controlled via MIDI control surfaces, DMX lighting consoles, or via the ArtNet and OSC protocols. Smode Station relies on a system of rules for linking input devices to specific actions in the composition. Such actions can, for example, make a layer visible, change an opacity parameter or launch an animation. To create a new rule, the learn feature lets you quickly associate a specific controller with the parameter being automated. It is even possible to make a rule that activates or deactivates other rules. The assignment of external controllers is thus fully programmable, which enables advanced customization depending on the specific needs of the show.
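A rule system of this kind can be sketched as a table mapping incoming control addresses to actions. The OSC-style addresses and parameter names below are invented for illustration; this is not Smode's API, and actual network reception is omitted.

```python
# Sketch of a control-rule system: learned rules map incoming
# controller addresses to actions on composition parameters.

class RuleEngine:
    def __init__(self):
        self.rules = {}           # address -> callable taking the value

    def learn(self, address, action):
        # "learn": bind the next-touched controller to a parameter
        self.rules[address] = action

    def receive(self, address, value):
        if address in self.rules:
            self.rules[address](value)

params = {"layer1.opacity": 1.0, "layer2.visible": True}
engine = RuleEngine()
engine.learn("/fader/1", lambda v: params.__setitem__("layer1.opacity", v))
engine.learn("/button/2", lambda v: params.__setitem__("layer2.visible", v > 0.5))

engine.receive("/fader/1", 0.25)
engine.receive("/button/2", 0.0)
print(params)   # {'layer1.opacity': 0.25, 'layer2.visible': False}
```

A rule that enables or disables other rules would simply be an action that adds or removes entries from the rule table.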

Clock input

Smode Station supports three kinds of clocks: audio time code (LTC), MIDI time code (MTC) and the current time clock. These clocks can be used to synchronize animations or to trigger rules at specific times. It is also possible to use the current time clock to make generative content depend on the current hour and date.
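Both LTC and MTC ultimately carry a frame address of the form HH:MM:SS:FF. The arithmetic behind synchronizing to such a clock can be sketched as a conversion to seconds at a given frame rate (drop-frame time code is ignored here for simplicity).

```python
# Sketch of time-code arithmetic: HH:MM:SS:FF -> seconds at a frame rate.

def timecode_to_seconds(tc, fps=25):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) + f / fps

print(timecode_to_seconds("01:00:30:12", fps=25))   # 3630.48
```

An animation synchronized to the clock simply evaluates its parameters at the resulting time instead of at an internally generated one.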

Video Input

Optionally, Smode Station can be shipped with 4x 3G-SDI inputs or 2x DVI HD1080p video capture cards. Smode has dedicated implementations for these cards, which enable very low latency (less than two frames). For DeltaCast cards, we support DirectGMA, a technology that transfers video data directly from the capture card to the graphics card. This enables Smode to capture up to two HD signals at 50 frames per second. Input video streams are treated like any other layer in Smode: it is possible to apply any 2D modifier to them, to mask them, or even to generate 3D content from them.

DMX Screen and Control outputs

It is possible to output Smode Station’s content onto DMX screens through the ArtNet protocol. Such screens are treated like LED screens and are created and previsualized in the stage simulator. It is also possible to control specific devices from Smode Station, such as Panasonic video projector shutter and settings control (PJLink) and electromechanically motorized video projectors (Robe Minime or Beam 1800).
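For readers unfamiliar with ArtNet, the wire format is simple: DMX channel values travel in an ArtDmx packet over UDP. The sketch below builds such a packet following the public Art-Net specification (actually sending it, on UDP port 6454, is omitted).

```python
import struct

# Sketch of an Art-Net ArtDmx packet, the unit of DMX-over-Ethernet output.

def artdmx_packet(universe, dmx_data, sequence=0):
    assert 1 <= len(dmx_data) <= 512
    if len(dmx_data) % 2:                       # data length must be even
        dmx_data = dmx_data + bytes([0])
    return (b"Art-Net\x00"                      # fixed 8-byte header
            + struct.pack("<H", 0x5000)         # OpCode: ArtDmx (little-endian)
            + struct.pack(">H", 14)             # protocol version (big-endian)
            + bytes([sequence, 0])              # sequence, physical input port
            + struct.pack("<H", universe)       # 15-bit port-address
            + struct.pack(">H", len(dmx_data))  # channel count (big-endian)
            + dmx_data)

pkt = artdmx_packet(0, bytes([255, 0, 128, 64]))
print(len(pkt))   # 18-byte header + 4 channels = 22
```

A DMX screen is then just a mapping from pixels to (universe, channel) addresses, with one such packet emitted per universe per frame.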