V-Ray Application SDK

User Guide

I. Introduction

The V-Ray Application Software Development Kit (also known as the V-Ray App SDK or AppSDK) allows third-party integrators to initiate and control a rendering process that uses the V-Ray engine. It provides a high-level API that enables users to render and manipulate a V-Ray scene inside the host process or outside it using distributed rendering. The scene may be created in memory by translating the native format to V-Ray plugins, or it may be loaded from a file, potentially exported from another application.

II. Setup

In order to use the V-Ray Application SDK, the VRAY_SDK environment variable should be set. To use the Python binding, the python folder must also be on the Python module search path (e.g. by adding it to PYTHONPATH). The setenv script sets both environment variables: call setenv.bat on Windows or source setenv.sh on Linux/macOS.

When VRAY_SDK is set, the AppSDK binding will search for the VRaySDKLibrary binary and its dependencies inside VRAY_SDK/bin. If the directory structure of a product that uses AppSDK doesn’t include binaries inside a /bin folder, VRAY_APPSDK_BIN can be used instead of VRAY_SDK. In that case, the AppSDK binding will try to load the binaries directly from the folder that VRAY_APPSDK_BIN points at.

If no environment variables can be used, the AppSDK API allows setting the search path at runtime (via vray.setSDKLibraryPath(bin_folder) in Python or the VRayInit vray(bin_folder) constructor in C++). Please refer to the API documentation for the corresponding binding.
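
For example, a minimal Python sketch of the runtime approach could look like the following; the bin_folder value is a placeholder and must point to the actual folder that contains the VRaySDKLibrary binary and its dependencies:

import vray

# Placeholder path; point it to the folder that holds the VRaySDKLibrary binary.
bin_folder = '/path/to/appsdk/bin'

# Set the search path at runtime instead of relying on VRAY_SDK / VRAY_APPSDK_BIN.
vray.setSDKLibraryPath(bin_folder)

# The renderer can now be created as usual.
with vray.VRayRenderer() as renderer:
    renderer.load('./intro.vrscene')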

III. Basic Rendering

The V-Ray Application SDK exposes a VRayRenderer class that initiates and controls any rendering process. The class defines high-level methods for creating or loading a V-Ray scene and rendering it. It hides the inner complexities of the V-Ray engine while keeping its powerful capabilities.

The basic workflow for starting a render process usually consists of the following steps: create a VRayRenderer instance, load or build a scene, start the render, wait for it to finish (or for a timeout), and retrieve the resulting image.

All language implementations of the VRayRenderer class offer a method without arguments (start()) that is used to start the rendering of the currently loaded scene. The method is a non-blocking call which internally runs the V-Ray engine in a separate thread. Thus the rendering can conveniently take place as a background process without the need to use language-specific tools for creating threads.

The VRayRenderer class gives access to the current state of the rendered image at any point in time. Whether the render process has finished or not, an image can be extracted to track the progress of the rendering. All language bindings expose a method that returns an image object which holds the rendered image data at the time of the request. This image class can be used for direct display or to save the rendered image to a file in one of several supported popular compression formats.

#include "vraysdk.hpp"
using namespace VRay;

int main() {
    VRayInit init;

    VRayRenderer renderer;
    renderer.load("./intro.vrscene");
    renderer.start();

    // Wait up to 6000 ms for the rendering to complete.
    renderer.waitForRenderEnd(6000);

    // Retrieve the current image and save it to a file.
    VRayImage* image = renderer.getImage();
    image->saveToPng("intro.png");
    delete image;
    return 0;
}
using VRay;

using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("./intro.vrscene");
    renderer.Start();

    renderer.WaitForRenderEnd(6000);
    VRayImage image = renderer.GetImage();
    image.SaveToPNG("intro.png");
}
import vray

with vray.VRayRenderer() as renderer:
    renderer.load('./intro.vrscene')
    renderer.start()

    renderer.waitForRenderEnd(6000)
    image = renderer.getImage()
    image.save('intro.png')
var vray = require('vray');
var renderer = vray.VRayRenderer();

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(6000, function() {
            var image = renderer.getImage();
            image.save('intro.png', function() {
                renderer.close();
            });
        });
    });
});

IV. Render Modes

The V-Ray Application SDK offers the ability to run the V-Ray engine in two distinct render modes - Production and Interactive.

Production mode is suitable for obtaining the final desired image after the scene has been carefully configured as the render process can take a lot of time before the whole image is available.

Interactive mode on the other hand is very useful for fast image preview and progressive image quality improvement which makes it the perfect choice for experimenting with scene settings and modifying content and receiving fast render feedback.

Each render mode is supported in two flavors - CPU and GPU. The former utilizes the CPU resources, while the latter takes advantage of the graphics card's computing power. GPU rendering is in turn subdivided into two modes - CUDA and OptiX (both NVIDIA only) - depending on the technology used on the GPU.

The type of rendering that will be initiated can be chosen easily through the renderMode property of the VRayRenderer object. The default render mode for the VRayRenderer class is Interactive (CPU). The render mode can be changed between renders to avoid having to re-create or re-load the scene. This allows you to work interactively on a scene with V-Ray and then to switch to production mode for final rendering without any overhead.

VRayInit init;

VRayRenderer renderer;
renderer.setRenderMode(RenderMode::RENDER_MODE_PRODUCTION);
renderer.load("./intro.vrscene");
renderer.start();

renderer.waitForRenderEnd(6000);
VRayImage* image = renderer.getImage();
image->saveToPng("intro.png");
delete image;
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.RenderMode = RenderMode.Production;
    renderer.Load("./intro.vrscene");
    renderer.Start();

    renderer.WaitForRenderEnd(6000);
    VRayImage image = renderer.GetImage();
    image.SaveToPNG("intro.png");
}
with vray.VRayRenderer() as renderer:
    renderer.renderMode = 'production'
    renderer.load('./intro.vrscene')
    renderer.start()

    renderer.waitForRenderEnd(6000)
    image = renderer.getImage()
    image.save('intro.png')
var renderer = vray.VRayRenderer();
renderer.renderMode = 'production';

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.start();

    renderer.waitForRenderEnd(6000, function() {
        var image = renderer.getImage();
        image.save('intro.png', function() {
            renderer.close();
        });
    });
});

V. Common Events

During the rendering process that runs in the background in a separate thread, V-Ray emits a number of events that are available to AppSDK users. You can subscribe to these events through the main VRayRenderer class.

The V-Ray Application SDK provides the means for attaching callbacks that execute custom client logic when an event occurs. The code of the callbacks is executed in a separate thread that is different from the main V-Ray rendering thread. This design allows the V-Ray rendering process to remain fast without additional overhead and ensures that slow operations in client callback code will not affect the total rendering time. Keep in mind that slow user callbacks may delay the execution of other callbacks. Each callback includes a time 'instant' argument that gives you the exact time the event occurred. The callback itself may be executed at a noticeably different moment because it is queued and invoked asynchronously.

The events can be broadly classified into three types: events common for all render modes, events specific to bucket rendering and events specific to progressive rendering. This section covers the events that are emitted regardless of the type of sampling.

The common render events occur when the renderer state changes, when a log message is produced, and when the render progress is updated.

State Changed Event

The state change event is emitted every time the renderer transitions from one state to another, such as when rendering starts, when the image is done, or when an error occurs and rendering stops (see the diagram below). This is the main place to implement user logic related to initiating and completing renders.

State Changed Diagram

Unless startSync() is used instead of the asynchronous start() method, no changes should be made to the scene or the renderer while it is in the PREPARING state; attempts to make changes before the renderer transitions out of this state will be rejected.

The final image is available when the renderer gets to the IDLE_DONE state. Intermediate images are also available during rendering.

void onStateChanged(VRayRenderer& renderer, RendererState oldState, RendererState newState, double instant, void* userData)
{
    printf("State changed from %d to %d\n", oldState, newState);
}

int main() {
    VRayInit init;

    VRayRenderer renderer;
    renderer.setOnStateChanged(onStateChanged);

    renderer.load("./intro.vrscene");
    renderer.start();

    renderer.waitForRenderEnd(6000);
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.StateChanged += new EventHandler((source, e) =>
    {
        Console.WriteLine("State changed from {0} to {1}", e.OldState, e.NewState);
    });

    renderer.Load("./intro.vrscene");
    renderer.Start();

    renderer.WaitForRenderEnd(6000);
}
def onStateChanged(renderer, oldState, newState, instant):
    print('State changed from', oldState, 'to', newState)

with vray.VRayRenderer() as renderer:
    renderer.setOnStateChanged(onStateChanged)

    renderer.load('./intro.vrscene')
    renderer.start()

    renderer.waitForRenderEnd(6000)
var renderer = vray.VRayRenderer();

renderer.on('stateChanged', function(oldState, newState, instant) {
    console.log('State changed from ' + oldState + ' to ' + newState);
});

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(6000, function() {
            renderer.close();
        });
    });
});

Log Message Event

Output messages are produced by the V-Ray engine during scene loading and rendering and they can be captured by subscribing to the message log event. The callback data that becomes available when a message is logged is the text of the message and the log level type (info, warning, error or debug).

void onLogMessage(VRayRenderer &renderer, const char* message, MessageLevel level, double instant, void* userData) {
    switch (level) {
    case MessageError:
        printf("[ERROR] %s\n", message);
        break;
    case MessageWarning:
        printf("[Warning] %s\n", message);
        break;
    case MessageInfo:
        printf("[info] %s\n", message);
        break;
    }
}

int main() {
    VRayInit init(NULL, true);
    VRayRenderer renderer;
    renderer.setOnLogMessage(onLogMessage);
    // ...
    return 0;
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.LogMessage += new EventHandler<MessageEventArgs>((source, e) =>
    {
        // The check can be removed for testing, but Debug messages are usually ignored in production code
        if (e.LogLevel != LogLevelType.Debug)
        {
            Console.WriteLine(String.Format("[{0}] {1}", e.LogLevel.ToString(), e.Message));
        }
    });

    // ...
}
def onLogMessage(renderer, message, level, instant):
    if level == vray.LOGLEVEL_ERROR:
        print("[ERROR]", message)
    elif level == vray.LOGLEVEL_WARNING:
        print("[Warning]", message)
    elif level == vray.LOGLEVEL_INFO:
        print("[info]", message)

with vray.VRayRenderer() as renderer:
    renderer.setOnLogMessage(onLogMessage)

    # ...
var renderer = vray.VRayRenderer();

renderer.on('logMessage', function(message, level, instant) {
    if (level == vray.LOGLEVEL_ERROR)
        console.log("[ERROR] ", message);
    else if (level == vray.LOGLEVEL_WARNING)
        console.log("[Warning] ", message);
    else if (level == vray.LOGLEVEL_INFO)
        console.log("[info] ", message);
});

// ...

Progress Event

Progress events are emitted whenever the current task changes and also when the amount of work done increases. They include a short message with the current task name and two numbers for the total amount of work to do and the work that is already complete.
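
As a rough illustration, assuming the Python binding exposes a setOnProgress method that follows the same pattern as the other event subscriptions (the exact method name and callback signature should be verified against the API documentation for your binding), subscribing to progress updates could look roughly like this:

def onProgress(renderer, message, elementsCompleted, elementsTotal, instant):
    # 'message' names the current task; the two counters describe how much
    # of the total work for that task is already done.
    if elementsTotal > 0:
        print('{0}: {1}/{2}'.format(message, elementsCompleted, elementsTotal))
    else:
        print(message)

with vray.VRayRenderer() as renderer:
    renderer.setOnProgress(onProgress)

    renderer.load('./intro.vrscene')
    renderer.start()
    renderer.waitForRenderEnd(6000)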

VI. Bucket Events

There are two types of image sampling in Production mode - “bucket” sampling and progressive sampling. Bucket rendering splits the image into small rectangular sub-images, each processed independently by a different local CPU thread or network server (see the Distributed Rendering section). The sub-images are only returned when completely ready. Progressive rendering works the same way as in Interactive mode - the whole image is sampled and images are returned for each sampling pass, reducing noise with each pass. This section covers the events which are specific to Production bucket rendering only. The progressive sampling mode emits the event described in the Progressive Events section further below.

There are two phases in Production bucket rendering - assigning a bucket to a render host (or local thread) and rendering the assigned region of the image. This is why there are two events that are raised for each bucket of the image - initializing a bucket and receiving the image result. Users of the API can subscribe to the events via the main VRayRenderer class.

In Production bucket mode the image is treated as a grid of rectangular regions, "buckets". Each bucket is uniquely identified by the coordinates of its top left corner and the width and height of its rectangular region. The top left corner of the whole image has coordinates (0, 0).

To enable bucket sampling, the VRayRenderer's render mode must be set to Production and the type parameter of the SettingsImageSampler plugin must be set to 1 (SettingsImageSampler::type=1).

Bucket Init Event

The bucket init event is raised when the main V-Ray thread assigns a bucket to a network render host or local thread. The callback data that is provided for this event is the bucket size and coordinates as well as the name of the render host (if any) as it appears on the network.

using namespace VRay;
using namespace VRay::Plugins;

void onBucketInit(VRayRenderer& renderer, int x, int y, int width, int height
    , const char* host, ImagePassType pass, double instant, void* userData) {
    printf("Starting bucket:\n");
    printf("\t x: %d\n", x);
    printf("\t y: %d\n", y);
    printf("\t width: %d\n", width);
    printf("\t height: %d\n", height);
    printf("\t host: %s\n", host);
}

int main() {
    VRayInit init(NULL, true);

    VRayRenderer renderer;
    renderer.setRenderMode(VRayRenderer::RenderMode::RENDER_MODE_PRODUCTION);
    renderer.setOnBucketInit(onBucketInit);

    renderer.load("./intro.vrscene");
    // Set the sampler type to adaptive (buckets).
    SettingsImageSampler sis = renderer.getInstanceOrCreate<SettingsImageSampler>();
    sis.set_type(1);
    // Stop LC calculation, so we get buckets immediately.
    SettingsGI sgi = renderer.getInstanceOrCreate<SettingsGI>();
    sgi.set_secondary_engine(2);

    renderer.startSync();
    renderer.waitForRenderEnd();
    return 0;
}
using VRay;
using VRay.Plugins;

using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.RenderMode = RenderMode.PRODUCTION;
    renderer.BucketInit += new EventHandler<BucketRegionEventArgs>((source, e) =>
    {
        Console.WriteLine("Starting bucket:");
        Console.WriteLine("\t x:" + e.X);
        Console.WriteLine("\t y:" + e.Y);
        Console.WriteLine("\t width:" + e.Width);
        Console.WriteLine("\t height:" + e.Height);
        Console.WriteLine("\t host:" + e.Host);
    });

    renderer.Load("./intro.vrscene");
    // Set the sampler type to adaptive (buckets).
    SettingsImageSampler sis = renderer.GetInstanceOrCreate<SettingsImageSampler>();
    sis.SettingsImageSamplerType = 1;
    // Stop LC calculation, so we get buckets immediately.
    SettingsGI sgi = renderer.GetInstanceOrCreate<SettingsGI>();
    sgi.SecondaryEngine = 2;

    renderer.Start();
    renderer.WaitForRenderEnd();
}
def onBucketInit(renderer, bucket, passType, instant):
    print('Starting bucket:')
    print('\t x: ', bucket.x)
    print('\t y: ', bucket.y)
    print('\t width: ', bucket.width)
    print('\t height:', bucket.height)
    print('\t host:', bucket.host)

with vray.VRayRenderer() as renderer:
    renderer.renderMode = 'production'
    renderer.setOnBucketInit(onBucketInit)

    renderer.load('./intro.vrscene')
    # Set the sampler type to adaptive (buckets).
    sis = renderer.classes.SettingsImageSampler.getInstanceOrCreate()
    sis.type = 1
    # Disable light cache so we get buckets straight away.
    sgi = renderer.classes.SettingsGI.getInstanceOrCreate()
    sgi.secondary_engine = 2

    renderer.start()
    renderer.waitForRenderEnd()
var renderer = vray.VRayRenderer();
renderer.renderMode = 'production';

renderer.on('bucketInit', function(region, passType, instant) {
    console.log('Starting bucket:');
    console.log('\t x:' + region.x);
    console.log('\t y:' + region.y);
    console.log('\t width:' + region.width);
    console.log('\t height:' + region.height);
    console.log('\t host:' + region.host);
});

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    // Set the sampler type to adaptive (buckets).
    var sis = renderer.classes.SettingsImageSampler.getInstanceOrCreate();
    sis.type = 1;
    // Disable light cache so we get buckets straight away.
    var sgi = renderer.classes.SettingsGI.getInstanceOrCreate();
    sgi.secondary_engine = 2;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(function() {
            renderer.close();
        });
    });
});

Bucket Ready Event

The bucket ready event is raised when the render host that has been assigned a bucket has finished rendering the part of the image and has sent its result to the main V-Ray thread. The returned callback data for the event contains the size and coordinates of the region, the render host name and the produced image.

using namespace VRay;
using namespace VRay::Plugins;

void onBucketReadyCallback(VRayRenderer& renderer, int x, int y, const char* host
    , VRayImage* image, ImagePassType pass, double instant, void* userData) {
    printf("Bucket ready:\n");
    printf("\t x: %d\n", x);
    printf("\t y: %d\n", y);
    printf("\t width: %d\n", image->getWidth());
    printf("\t height: %d\n", image->getHeight());
    printf("\t host: %s\n", host);

    char fileName[64];
    sprintf(fileName, "intro-%d-%d.png", x, y);
    image->saveToPng(fileName);
}

int main() {
    VRayInit init(NULL, true);

    VRayRenderer renderer;
    renderer.setRenderMode(VRayRenderer::RenderMode::RENDER_MODE_PRODUCTION);
    renderer.setOnBucketReady(onBucketReadyCallback);

    renderer.load("./intro.vrscene");
    // Set the sampler type to adaptive (buckets).
    SettingsImageSampler sis = renderer.getInstanceOrCreate<SettingsImageSampler>();
    sis.set_type(1);
    // Stop LC calculation, so we get buckets immediately.
    SettingsGI sgi = renderer.getInstanceOrCreate<SettingsGI>();
    sgi.set_secondary_engine(2);

    renderer.startSync();
    renderer.waitForRenderEnd();
    return 0;
}
using VRay;
using VRay.Plugins;

using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.RenderMode = RenderMode.PRODUCTION;
    renderer.BucketReady += new EventHandler<BucketImageEventArgs>((source, e) =>
    {
        Console.WriteLine("Bucket ready:");
        Console.WriteLine("\t x:" + e.X);
        Console.WriteLine("\t y:" + e.Y);
        Console.WriteLine("\t width:" + e.Width);
        Console.WriteLine("\t height:" + e.Height);
        Console.WriteLine("\t host:" + e.Host);

        VRayImage image = e.Image;
        image.SaveToPNG(string.Format("intro-{0}-{1}.png", e.X, e.Y));
        image.Dispose();
    });

    renderer.Load("./intro.vrscene");
    // Set the sampler type to adaptive (buckets).
    SettingsImageSampler sis = renderer.GetInstanceOrCreate<SettingsImageSampler>();
    sis.SettingsImageSamplerType = 1;
    // Stop LC calculation, so we get buckets immediately.
    SettingsGI sgi = renderer.GetInstanceOrCreate<SettingsGI>();
    sgi.SecondaryEngine = 2;

    renderer.Start();
    renderer.WaitForRenderEnd();
}
def onBucketReady(renderer, bucket, passType, instant):
    print('Bucket ready:')
    print('\t x: ', bucket.x)
    print('\t y: ', bucket.y)
    print('\t width: ', bucket.width)
    print('\t height:', bucket.height)
    print('\t host:', bucket.host)

    fileName = 'intro-{0}-{1}.png'.format(bucket.x, bucket.y)
    bucket.save(fileName)

with vray.VRayRenderer() as renderer:
    renderer.renderMode = 'production'
    renderer.setOnBucketReady(onBucketReady)

    renderer.load('./intro.vrscene')
    # Set the sampler type to adaptive (buckets).
    sis = renderer.classes.SettingsImageSampler.getInstanceOrCreate()
    sis.type = 1
    # Disable light cache so we get buckets straight away.
    sgi = renderer.classes.SettingsGI.getInstanceOrCreate()
    sgi.secondary_engine = 2

    renderer.start()
    renderer.waitForRenderEnd()
var renderer = vray.VRayRenderer();
renderer.renderMode = 'production';

renderer.on('bucketReady', function(bucket, passType, instant) {
    console.log('Bucket ready:');
    console.log('\t x:' + bucket.x);
    console.log('\t y:' + bucket.y);
    console.log('\t width:' + bucket.width);
    console.log('\t height:' + bucket.height);
    console.log('\t host:' + bucket.host);

    var fileName = 'intro-' + bucket.x + '-' + bucket.y + '.png'
    bucket.save(fileName, function() {
        bucket.close();
    });
});

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    // Set the sampler type to adaptive (buckets).
    var sis = renderer.classes.SettingsImageSampler.getInstanceOrCreate();
    sis.type = 1;
    // Disable light cache so we get buckets straight away.
    var sgi = renderer.classes.SettingsGI.getInstanceOrCreate();
    sgi.secondary_engine = 2;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(function() {
            renderer.close();
        });
    });
});

VII. Progressive Events

When rendering in Interactive mode or in Production with the image sampler set for progressive sampling (instead of buckets), the whole image is sampled at once and the progressiveImageUpdated event is emitted for each sampling pass with a more refined image. The purpose of Interactive mode is to allow users to receive fast feedback for the scene they have configured. Noisy images are quickly available when the Interactive render process starts.

int counter = 0;
void onImageUpdated(VRayRenderer& renderer, VRayImage* image
    , unsigned long long index, ImagePassType passType, double instant, void* userData) {
    char fileName[64];
    sprintf(fileName, "intro-%d.jpeg", ++counter);
    image->saveToJpeg(fileName);
}

int main() {
    VRayInit init(NULL, true);

    VRayRenderer renderer;
    renderer.setOnProgressiveImageUpdated(onImageUpdated);

    renderer.load("./intro.vrscene");
    renderer.startSync();
    renderer.waitForRenderEnd();
    return 0;
}
using (VRayRenderer renderer = new VRayRenderer())
{
    int counter = 0;
    renderer.ProgressiveImageUpdated += new EventHandler<VRayImageEventArgs>((source, e) =>
    {
        string fileName = string.Format("intro-{0}.jpeg", ++counter);
        e.Image.SaveToJPEG(fileName);
        e.Image.Dispose();
    });
    renderer.Load("./intro.vrscene");
    renderer.Start();
    renderer.WaitForRenderEnd();
}
counter = 0

def onImageUpdated(renderer, image, index, passType, instant):
    global counter
    counter += 1
    fileName = 'intro-{0}.jpeg'.format(counter)
    image.save(fileName)

with vray.VRayRenderer() as renderer:
    renderer.setOnProgressiveImageUpdated(onImageUpdated)

    renderer.load('./intro.vrscene')
    renderer.start()
    renderer.waitForRenderEnd()
var renderer = vray.VRayRenderer();

var counter = 0;
renderer.on('progressiveImageUpdated', function(image, index, passType, instant) {
    var fileName = 'intro-' + (++counter) + '.jpeg';
    image.save(fileName, function() {
        image.close();
    });
});

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(function() {
            renderer.close();
        });
    });
});

VIII. V-Ray Images

The VRayImage class provides access to the images rendered by V-Ray. It is used by the VRayRenderer, as well as by some of the arguments of the callbacks invoked when an event occurs. The VRayImage class provides utility functions for manipulating the retrieved binary image data. The binary data is in full 32-bit float per channel format and can be accessed directly, but there are convenience methods that perform compression in several of the most popular 8-bit formats - BMP, JPEG, PNG. Saving EXR and other high dynamic range formats is also possible through another API: VRayRenderer.vfb.saveImage().

The methods that are exposed by the VRayImage class come in two flavors. The first group of methods directly returns the bytes of the compressed image, while the second group compresses the image and saves it to a file.

int main() {
    VRayInit init(NULL, true);

    VRayRenderer renderer;
    renderer.load("./intro.vrscene");
    renderer.startSync();
    renderer.waitForRenderEnd(6000);

    VRayImage* image = renderer.getImage();
    size_t imageSize;
    Png* png = image->compressToPng(imageSize);
    ofstream outputStream("intro_compressTo.png", ofstream::binary);
    outputStream.write((char*)png->getData(), imageSize);
    outputStream.close();
    delete png;
    bool res = image->saveToPng("intro_saveTo.png");
    delete image;
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("./intro.vrscene");
    renderer.Start();
    renderer.WaitForRenderEnd(6000);

    using (VRayImage image = renderer.GetImage())
    {
        byte[] data = image.CompressToPNG();
        using (FileStream outStream = new FileStream("intro_compressTo.png", FileMode.Create, FileAccess.Write))
        {
            outStream.Write(data, 0, data.Length);
        }
        image.SaveToPNG("intro_saveTo.png");
    }
}
with vray.VRayRenderer() as renderer:
    renderer.load('./intro.vrscene')
    renderer.start()
    renderer.waitForRenderEnd(6000)

    image = renderer.getImage()
    data = image.compress(type='png')
    with open('intro_compressTo.png', 'wb') as outStream:
        outStream.write(data)
    image.save("intro_saveTo.png")
var fs = require('fs');
var renderer = vray.VRayRenderer();

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.start(function(err) {
        if (err) throw err;
        renderer.waitForRenderEnd(6000, function () {
            var image = renderer.getImage();
            image.compress('png', function (err, buffer) {
                if (err) throw err;
                fs.writeFile('intro_compressTo.png', buffer, function () {
                    image.save('intro_saveTo.png', function () {
                        renderer.close();
                    });
                });
            });
        });
    });
});

All instances of the VRayImage class must be closed so that memory resources held by the instance are released. It is recommended to always free the resources (close the image) when you have finished working with the image. The platforms that support a garbage collection mechanism will take care of freeing the internally held resources in the event that the user does not close the retrieved image.

Downscaling

In addition to the utility methods for compression, the VRayImage class supports downscale operations that resize the retrieved image to a smaller one. The result of the downscale operations is another VRayImage instance which has all the utility methods of the class.

int main() {
    VRayInit init(NULL, true);

    VRayRenderer renderer;
    renderer.load("./intro.vrscene");
    renderer.start();
    renderer.waitForRenderEnd(6000);

    // LocalVRayImage is a variation of VRayImage to be created on the stack, so that the image gets auto-deleted
    LocalVRayImage image = renderer.getImage();
    LocalVRayImage downscaled = image->getDownscaled(260, 180);
    downscaled->saveToPng("intro.png");
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("./intro.vrscene");
    renderer.Start();
    renderer.WaitForRenderEnd(6000);

    using (VRayImage image = renderer.GetImage())
    {
        using (VRayImage downscaled = image.GetDownscaled(260, 180))
        {
            downscaled.SaveToPNG("intro.png");
        }
    }
}
with vray.VRayRenderer() as renderer:
    renderer.load('./intro.vrscene')
    renderer.start()
    renderer.waitForRenderEnd(6000)

    image = renderer.getImage()
    downscaled = image.getDownscaled(260, 180)
    downscaled.save('intro.png')
var renderer = vray.VRayRenderer();

renderer.load('./intro.vrscene', function (err) {
    if (err) throw err;
    renderer.start();

    renderer.waitForRenderEnd(6000, function () {
        var image = renderer.getImage();
        image.getDownscaled(260, 180, function(downscaled) {
            downscaled.save('intro.png', function() {
                downscaled.close();  // Not mandatory, can be left to the garbage collector
                image.close();       // Not mandatory, can be left to the garbage collector
                renderer.close();
            });
        });
    });
});

Changing the Image Size

The size of the rendered image can be controlled through the size property of the VRayRenderer class. When a “.vrscene” file is loaded the size defined in this scene file is used unless the VRayRenderer size is set after loading the file to override it.

int main() {
    VRayInit init;

    VRayRenderer renderer;
    renderer.load("./intro.vrscene");
    renderer.setImageSize(640, 360);
    renderer.start();
    renderer.waitForRenderEnd(6000);

    LocalVRayImage image = renderer.getImage();
    image->saveToPng("intro.png");
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("./intro.vrscene");
    renderer.SetImageSize(640, 360);
    renderer.Start();
    renderer.WaitForRenderEnd(6000);

    using (VRayImage image = renderer.GetImage())
    {
        image.SaveToPNG("intro.png");
    }
}
with vray.VRayRenderer() as renderer:
    renderer.load('./intro.vrscene')
    renderer.size = (640, 360)
    renderer.start()
    renderer.waitForRenderEnd(6000)

    image = renderer.getImage()
    image.save('intro.png')
var renderer = vray.VRayRenderer();

renderer.load('./intro.vrscene', function(err) {
    if (err) throw err;
    renderer.size = {width: 640, height: 360};
    renderer.start();

    renderer.waitForRenderEnd(6000, function() {
        var image = renderer.getImage();
        image.save('intro.png', function() {
            image.close();
            renderer.close();
        });
    });
});

IX. Render Elements

V-Ray Render Elements (also known as Render Channels or Arbitrary Output Variables, AOVs) are images containing various types of render data encoded as 3-component colors, single floats or integers. Examples include Z-depth, surface normals, UV coordinates, velocity, lighting, reflections, etc. Each V-Ray scene may contain an arbitrary number of render elements (also called channels). Each channel is enabled by a unique plugin, except for the RGB and Alpha channels, which are always enabled.

To access the render elements in the current scene, use the VRayRenderer instance where the scene is loaded. Each render element’s data can be taken either as a VRayImage, or as raw data (as byte, integer, or float buffers). Optionally, provide a sub-region of interest to the APIs to get that part of the data.

int main() {
    VRayInit init;

    // Do some rendering
    VRayRenderer renderer;
    renderer.setRenderMode(RenderMode::RENDER_MODE_PRODUCTION);
    renderer.load("cornell_new.vrscene");
    renderer.setImageSize(320, 200);

    // Access render elements via the VRayRenderer instance
    RenderElements renderElements = renderer.getRenderElements();

    // Direct lighting
    renderElements.add(RenderElement::LIGHTING, NULL, NULL);
    // Global illumination, indirect lighting
    renderElements.add(RenderElement::GI, NULL, NULL);

    renderer.start();

    // Here, we simply wait for the entire image to be completed before we access the render elements
    renderer.waitForRenderEnd();

    std::vector<RenderElement> allElements = renderElements.getAll(RenderElement::NONE);

    // List all available render elements and process each one
    for (int i = 0; i < allElements.size(); i++) {
        RenderElement re = allElements[i];
        // See enum BinaryFormat and enum PixelFormat for values
        printf("Channel %s, Format(%d), PixelFormat(%d)\n", re.getName().c_str(),
            re.getBinaryFormat(), re.getDefaultPixelFormat());

        // Output render element's data as an image
        // Optionally, specify an image sub-region, or leave blank to get the entire image
        LocalVRayImage img = re.getImage();
        img->saveToPng(re.getName() + ".png");

        // Similar to the image output, get the raw bytes
        // Again, a sub-region can be specified, or the entire data can be obtained if left blank
        void* data = NULL;
        GetDataOptions getOptions;
        int dataSize = re.getData(&data, getOptions);
        if (dataSize > 0) {
            ofstream datFile(re.getName() + ".dat", ios::out | ios::binary);
            datFile.write(reinterpret_cast<const char*>(data), dataSize);
            datFile.close();
            RenderElement::releaseData(data);
        }
    }
}
using (VRayRenderer vr = new VRayRenderer())
{
    vr.RenderMode = RenderMode.Production;

    vr.Load("./cornell_new.vrscene");

    // Access render elements via the VRayRenderer instance
    RenderElements re = vr.RenderElements;

    // Direct lighting
    re.Add(RenderElementType.LIGHTING, "", "");
    // Global illumination, indirect lighting
    re.Add(RenderElementType.GI, "", "");

    // Do some rendering
    vr.Start();

    // Here, we wait for the entire image to be completed before we access the render elements
    vr.WaitForRenderEnd();

    // Render elements are available as plugins which are part of the scene and can be obtained in both production and interactive mode
    // Only "RGB" and "Alpha" are implicit
    var allRenderElements = re.GetAll();

    // List all available render elements and process each one
    foreach (RenderElement r in allRenderElements)
    {
        // See enum RenderElementFormat and enum RenderElementPixelFormat for values
        Console.WriteLine("{0}, Format({1}), PixelFormat({2})\n", r.Name, r.BinaryFormat, r.DefaultPixelFormat);

        // Output render element's data as an image
        // Optionally, specify an image sub-region, or leave blank to get the entire image
        VRayImage img = r.GetImage();
        img.SaveToPNG(true, r.Name + ".png");

        // Similar to the image output, get the raw bytes
        // Again, a sub-region can be specified, or the entire data can be obtained if left blank
        GetDataOptions getOptions = new GetDataOptions();
        byte[] rawData = r.GetData(getOptions);

        // Do something with rawData...
    }
}
with vray.VRayRenderer() as renderer:
    renderer.renderMode = 'production'
    renderer.load('./cornell_new.vrscene')

    # Direct lighting
    renderer.renderElements.add('lighting')
    # Global illumination, indirect lighting
    renderer.renderElements.add('gi')

    renderer.start()
    # Access render elements via the VRayRenderer instance
    renderElements = renderer.renderElements.getAll()

    # Here, we simply wait for the entire image to be completed before we access
    # the render elements
    renderer.waitForRenderEnd()

    # List all available render elements and process each one
    for re in renderElements:
        print('{0}, Format({1}), PixelFormat({2})'.format(re.name, re.binaryFormat, re.defaultPixelFormat))

        # Output render element's data as an image
        # Optionally, specify an image sub-region, or leave blank to get the entire image
        img = re.getImage()
        img.save(re.name + '.png', preserveAlpha=True)

        # Similar to the image output, get the raw bytes
        # Again, a sub-region can be specified, or the entire data can be obtained if left blank
        rawData = re.getData()

        # Do something with rawData...
var r = vray.VRayRenderer();
r.renderMode = 'production';

r.load('./cornell_new.vrscene', function(err) {
    if (err) throw err;

    r.size = {width: 320, height: 200};

    // Direct lighting
    r.renderElements.add('lighting');
    // Global illumination, indirect lighting
    r.renderElements.add('gi');

    // Do some rendering
    r.start();

    // Here, we simply wait for the entire image to be completed before we access the render elements
    r.waitForRenderEnd(function() {
        // If we don't pass a specific type we get all render elements
        var allElements = r.renderElements.getAll();

        // List all available render elements and process each one
        for (var renderElement of allElements) {
            console.log("Channel " + renderElement.name
                + ", Format(" + renderElement.binaryFormat
                + "), PixelFormat(" + renderElement.defaultPixelFormat + ")");

            // Output render element's data as an image
            // Optionally, specify an image sub-region, or leave blank to get the entire image
            var reImage = renderElement.getImage();
            reImage.saveSync(renderElement.name + ".png");

            // Similar to the image output, get the raw bytes
            // Again, a sub-region can be specified, or the entire data can be obtained if left blank
            var rawData = renderElement.getData();
            console.log(rawData);
        }

        r.close();
    });
});

X. Plugins

Plugins are the objects that specify the lights, geometry, materials or settings that constitute the 3D scene. Each V-Ray scene consists of a set of plugin instances. The V-Ray Application SDK exposes methods in the main VRayRenderer class that can be used to create plugin instances or list and modify (or delete) the existing ones in the scene.

Plugin objects can be retrieved by instance name. Once the user has obtained a plugin instance, its property (a.k.a. parameter) values can be viewed or set to affect the rendered image. The properties of the plugin are accessed by name. In Interactive mode the changes to plugin property values are usually applied immediately during rendering and changes become visible almost instantly, but for each change the image sampling is reset and you get some noisy images initially. In Production mode changes to plugin property values take effect if they are applied before the rendering process is started. The following example demonstrates how the transform for the render view (camera) in a scene can be changed with the AppSDK.

#include "vraysdk.hpp"
// The vrayplugins header provides specialized classes for concrete types of plugins, deriving from the generic Plugin base class.
// It is optional and the generic Plugin API can be used instead.
#include "vrayplugins.hpp"
int main() {
    VRayInit init(NULL, true);
    VRayRenderer renderer;
    renderer.load("cornell_new.vrscene");

    // find the RenderView plugin in the scene
    RenderView renderView = renderer.getPlugin<RenderView>("renderView");
    // change the transform value
    Transform newTransform = renderView.getTransform("transform");
    newTransform.offset += Vector(-170, 120, newTransform.offset.z);
    renderView.set_transform(newTransform);

    renderer.startSync();
    renderer.waitForRenderEnd(6000);

    return 0;
}
using VRay;
using VRay.Plugins;

using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("cornell_new.vrscene");

    // find the RenderView plugin in the scene
    RenderView renderView = renderer.GetPlugin<RenderView>("renderView");
    // change the transform value
    Transform newTransform = renderView.Transform;
    renderView.Transform = newTransform.ReplaceOffset(newTransform.Offset + new Vector(-170, 120, newTransform.Offset.Z));

    renderer.StartSync();
    renderer.WaitForRenderEnd(6000);
}
with vray.VRayRenderer() as renderer:
    renderer.load('cornell_new.vrscene')

    # find the RenderView plugin in the scene
    renderView = renderer.plugins.renderView
    # change the transform value
    newTransform = renderView.transform
    newTransform = newTransform.replaceOffset(newTransform.offset + vray.Vector(-170, 120, newTransform.offset.z))
    renderView.transform = newTransform

    renderer.startSync()
    renderer.waitForRenderEnd(6000)
var renderer = vray.VRayRenderer();

renderer.load('cornell_new.vrscene', function(err) {
    if (err) throw err;

    // find the RenderView plugin in the scene
    var renderView = renderer.plugins.renderView;
    // change the transform value
    var newTransform = renderView.transform;
    newTransform = newTransform.replaceOffset(newTransform.offset.add(vray.Vector(-170, 120, newTransform.offset.z)));
    renderView.transform = newTransform;
    // Start rendering.
    renderer.start(function (err) {
        if (err) throw err;
        renderer.waitForRenderEnd(6000, function() {
            renderer.close();
        });
    });
});

The call to a method that retrieves the value for a plugin property always returns a copy of the internal value stored in the local V-Ray engine. In order to change the plugin property value so that it affects the rendered image the setter method from the plugin class should be called. Simply modifying the value returned by a getter will not lead to changes in the scene as it is a copy, not a reference.
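
To illustrate, here is a minimal Python sketch that reuses the renderView calls from the example above and shows the difference between modifying the returned copy and assigning back through the setter:

with vray.VRayRenderer() as renderer:
    renderer.load('cornell_new.vrscene')

    renderView = renderer.plugins.renderView

    # This only modifies a local copy; the scene is NOT affected yet.
    t = renderView.transform
    t = t.replaceOffset(t.offset + vray.Vector(0, 0, 10))

    # Assigning through the property setter is what actually updates
    # the plugin inside the V-Ray engine.
    renderView.transform = t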

Adding and removing plugins

Plugin instances can be created and removed dynamically with the AppSDK. The next example demonstrates how a new light can be added to the scene.

#include "vraysdk.hpp"
#include "vrayplugins.hpp"

int main() {
    VRayInit init(NULL, true);
    VRayRenderer renderer;
    renderer.load("cornell_new.vrscene");

    // create a new light plugin
    LightOmni lightOmni = renderer.newPlugin<LightOmni>("lightOmniBlue");
    lightOmni.set_color(AColor(0.f, 0.f, 1.f));
    lightOmni.set_intensity(60000.f);
    lightOmni.set_decay(2.0f);
    lightOmni.set_shadowRadius(40.0f);
    lightOmni.set_transform(Transform(
                   Matrix(Vector(1.0, 0.0, 0.0),
                          Vector(0.0, 0.0, 1.0),
                          Vector(0.0, -1.0, 0.0)),
                   Vector(-50, 50, 50)));

    renderer.startSync();
    renderer.waitForRenderEnd(6000);
    return 0;
}
using VRay;
using VRay.Plugins;

using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("cornell_new.vrscene");

    // create a new light plugin
    LightOmni lightOmni = renderer.NewPlugin<LightOmni>("lightOmniBlue");
    lightOmni.Color = new AColor(0, 0, 1);
    lightOmni.Intensity = 60000.0f;
    lightOmni.Decay = 2.0f;
    lightOmni.ShadowRadius = 40.0f;
    lightOmni.Transform = new Transform(
                   new Matrix(new Vector(1.0, 0.0, 0.0),
                              new Vector(0.0, 0.0, 1.0),
                              new Vector(0.0, -1.0, 0.0)),
                   new Vector(-50, 50, 50));

    renderer.StartSync();
    renderer.WaitForRenderEnd(6000);
}
with vray.VRayRenderer() as renderer:
    renderer.load('./cornell_new.vrscene')

    # create a new light plugin
    lightOmni = renderer.classes.LightOmni('lightOmniBlue')
    lightOmni.color = vray.Color(0, 0, 1)
    lightOmni.intensity = 60000.0
    lightOmni.decay = 2.0
    lightOmni.shadowRadius = 40.0
    lightOmni.transform = vray.Transform(
                   vray.Matrix(vray.Vector(1.0, 0.0, 0.0),
                               vray.Vector(0.0, 0.0, 1.0),
                               vray.Vector(0.0, -1.0, 0.0)),
                   vray.Vector(160, -30, 200))

    renderer.start()

    renderer.waitForRenderEnd(6000)
var renderer = vray.VRayRenderer();

renderer.load('./cornell_new.vrscene', function(err) {
    if (err) throw err;

    // create a new light plugin
    var lightOmni = renderer.classes.LightOmni('lightOmniBlue');
    lightOmni.color = vray.Color(0, 0, 1);
    lightOmni.intensity = 60000.0;
    lightOmni.decay = 2.0;
    lightOmni.shadowRadius = 40.0;
    lightOmni.transform = vray.Transform(
                        vray.Matrix(vray.Vector(1.0, 0.0, 0.0),
                                    vray.Vector(0.0, 0.0, 1.0),
                                    vray.Vector(0.0, -1.0, 0.0)),
                        vray.Vector(160, -30, 200));

    renderer.start();

    renderer.waitForRenderEnd(6000, function() {
        renderer.close();
    });
});

In the following example we remove a sphere plugin by name:

int main() {
    VRayInit init;

    VRayRenderer renderer;
    renderer.load("./cornell_new.vrscene");

    Plugin sphere = renderer.getPlugin("Sphere0Shape4@node");
    renderer.deletePlugin(sphere);

    renderer.startSync();
    renderer.waitForRenderEnd(6000);
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.Load("./cornell_new.vrscene");

    Plugin sphere = renderer.GetPlugin("Sphere0Shape4@node");
    renderer.DeletePlugin(sphere);

    renderer.Start();
    renderer.WaitForRenderEnd(6000);
}
with vray.VRayRenderer() as renderer:
    renderer.load('./cornell_new.vrscene')

    del renderer.plugins['Sphere0Shape4@node']

    renderer.start()
    renderer.waitForRenderEnd(6000)
var renderer = vray.VRayRenderer();

renderer.load('./cornell_new.vrscene', function(err) {
    if (err) throw err;

    delete renderer.plugins['Sphere0Shape4@node'];

    renderer.start();
    renderer.waitForRenderEnd(6000, function() {
        renderer.close();
    });
});

Automatic commit of property changes

All changes made to plugin properties should be done before the rendering starts when rendering in Production mode. Any changes after that do not affect the current rendering, but they are not lost. In Interactive mode the changes made after the rendering starts are applied immediately to the scene by default. Alternatively, the user can decide to make a batch of changes to be applied together, usually for better performance. This is done using the autoCommit property of the VRayRenderer class. In the following example we demonstrate how a group of changes are batched and commit is called explicitly to apply them.

#include "vraysdk.hpp"
#include "vrayplugins.hpp"

int main() {
    VRayInit init(NULL, true);
    VRayRenderer renderer;
    renderer.setAutoCommit(false);
    renderer.load("cornell_new.vrscene");

    // This change won't be applied immediately
    LightOmni lightOmni = renderer.newPlugin<LightOmni>();
    lightOmni.set_color(Color(0.f, 0.f, 1.f));
    lightOmni.set_intensity(60000.0f);
    lightOmni.set_decay(2.0f);
    lightOmni.set_shadowRadius(40.0f);
    lightOmni.set_transform(Transform(
                        Matrix(Vector(1.0, 0.0, 0.0),
                               Vector(0.0, 0.0, 1.0),
                               Vector(0.0, -1.0, 0.0)),
                        Vector(-50, 50, 50)));

    renderer.startSync();
    renderer.waitForRenderEnd(2000);

    // Make a group of changes 2 seconds after the render starts
    Plugin sphere = renderer.getPlugin("Sphere0Shape4@node");
    renderer.deletePlugin(sphere);
    RenderView renderView = renderer.getPlugin<RenderView>("renderView");
    renderView.set_fov(1.5f);
    // Commit applies all 3 changes
    renderer.commit();

    renderer.waitForRenderEnd(4000);
    return 0;
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.AutoCommit = false;

    renderer.Load("./cornell_new.vrscene");

    // This change won't be applied immediately
    LightOmni lightOmni = renderer.NewPlugin<LightOmni>("lightOmniBlue");
    lightOmni.Color = new Color(0, 0, 1);
    lightOmni.Intensity = 60000.0f;
    lightOmni.Decay = 2.0f;
    lightOmni.ShadowRadius = 40.0f;
    lightOmni.Transform = new Transform(
                   new Matrix(new Vector(1.0, 0.0, 0.0),
                              new Vector(0.0, 0.0, 1.0),
                              new Vector(0.0, -1.0, 0.0)),
                   new Vector(160, -30, 200));

    renderer.StartSync();

    renderer.WaitForRenderEnd(2000);

    // Make a group of changes 2 seconds after the render starts
    Plugin sphere = renderer.GetPlugin("Sphere0Shape4@node");
    renderer.DeletePlugin(sphere);

    RenderView renderView = renderer.GetPlugin<RenderView>("renderView");
    renderView.Fov = 1.5f;

    // Commit applies all 3 changes
    renderer.Commit();
    renderer.WaitForRenderEnd(4000);
}
with vray.VRayRenderer() as r:
    r.autoCommit = False
    r.load('./cornell_new.vrscene')

    r.startSync()

    # create a new light plugin
    # This change won't be applied immediately
    lightOmni = r.classes.LightOmni()

    lightOmni.color = vray.Color(0, 0, 1)
    lightOmni.intensity = 60000.0
    lightOmni.decay = 2.0
    lightOmni.shadowRadius = 40.0
    lightOmni.transform = vray.Transform(
                    vray.Matrix(vray.Vector(1.0, 0.0, 0.0),
                                vray.Vector(0.0, 0.0, 1.0),
                                vray.Vector(0.0, -1.0, 0.0)),
                    vray.Vector(160, -30, 200))

    r.waitForRenderEnd(2000)

    # Make a group of changes 2 seconds after the render starts
    r.plugins.renderView.fov = 1.5
    del r.plugins['Sphere0Shape4@node']

    # Commit applies all 3 changes
    r.commit()

    r.waitForRenderEnd(4000)
var renderer = vray.VRayRenderer();
renderer.autoCommit = false;

renderer.load('./cornell_new.vrscene', function(err) {
    if (err) throw err;

    renderer.startSync();

    // This change won't be applied immediately
    var lightOmni = renderer.classes.LightOmni();
    lightOmni.color = vray.Color(0, 0, 1);
    lightOmni.intensity = 60000.0;
    lightOmni.decay = 2.0;
    lightOmni.shadowRadius = 40.0;
    lightOmni.transform = vray.Transform(
                   vray.Matrix(vray.Vector(1.0, 0.0, 0.0),
                               vray.Vector(0.0, 0.0, 1.0),
                               vray.Vector(0.0, -1.0, 0.0)),
                   vray.Vector(160, -30, 200));

    // Make a group of changes 2 seconds after the render starts
    setTimeout(function() {
        renderer.plugins.renderView.fov = 1.5;

        delete renderer.plugins['Sphere0Shape4@node']

        // Commit applies all 3 changes
        renderer.commit();
    }, 2000);

    renderer.waitForRenderEnd(6000, function() {
        renderer.close();
    });
});

XI. Property Types

The following value types are recognized for plugin properties:

Detailed documentation about working with each type can be found in the API docs as they have some language nuances. All types are in the VRay namespace.
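
As a quick illustration with the Python binding, the sketch below assigns properties of a few of the value types used elsewhere in this guide (an integer, floats, a Color, and a Transform built from Matrix and Vector values); the light instance name is chosen only for the example:

with vray.VRayRenderer() as renderer:
    renderer.load('./cornell_new.vrscene')

    # Integer property (image sampler type).
    sis = renderer.classes.SettingsImageSampler.getInstanceOrCreate()
    sis.type = 1

    # Float and Color properties on a light.
    lightOmni = renderer.classes.LightOmni('lightOmniExample')
    lightOmni.intensity = 30000.0
    lightOmni.color = vray.Color(1, 1, 1)

    # Compound Transform property built from Matrix and Vector values.
    lightOmni.transform = vray.Transform(
        vray.Matrix(vray.Vector(1, 0, 0),
                    vray.Vector(0, 1, 0),
                    vray.Vector(0, 0, 1)),
        vray.Vector(0, 0, 100))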

XII. Animations

V-Ray AppSDK supports rendering of animated scenes. Animated scenes contain animated plugin properties. A plugin property is considered animated if it has a sequence of values - each for a specific time or frame number. This defines key frames and in the process of sampling, interpolation is done between the values.

One can obtain the current frame number from the frame property of the renderer and the current time from the time property. The renderer enters the intermediate IDLE_FRAME_DONE state when a frame is done and there are more frames after it. To continue rendering the next frame, the continueSequence method of the renderer must be called. After the final frame the renderer transitions to IDLE_DONE.

void onStateChanged(VRayRenderer& renderer, RendererState oldState, RendererState newState, double instant, void* userData)
{
    printf("State changed from %d to %d\n", oldState, newState);
    if (newState == IDLE_FRAME_DONE) {
        // If the sequence has NOT finished - continue with next frame.
        renderer.continueSequence();
    }
}

int main() {
    VRayInit init;

    VRayRenderer renderer;
    renderer.setOnStateChanged(onStateChanged);

    renderer.load("./animation.vrscene");
    SubSequenceDesc subSequenceDescriptions[1];
    subSequenceDescriptions[0].start = 1;
    subSequenceDescriptions[0].end = 20;
    subSequenceDescriptions[0].step = 1;
    renderer.renderSequence(subSequenceDescriptions, 1);
    renderer.waitForSequenceDone();
    return 0;
}
using (VRayRenderer renderer = new VRayRenderer())
{
    renderer.StateChanged += new EventHandler((source, e) =>
    {
        Console.WriteLine("State changed from {0} to {1}", e.OldState, e.NewState);
        if (e.NewState == VRayRenderer.RendererState.IDLE_FRAME_DONE)
        {
            // If the sequence has NOT finished - continue with next frame.
            renderer.ContinueSequence();
        }
    });

    renderer.Load("animation.vrscene");

    // SubSequenceDesc(startFrame, endFrame, step) - the latter two parameters are optional.
    // By default endFrame is the last frame and step is 1, so it could be just a single frame and the implicit step is 1.
    // The range can be inverted and the separate ranges need not be sorted.
    // All these options are seen in the example array.
    SubSequenceDesc[] seq = { new SubSequenceDesc(3, 5), new SubSequenceDesc(1),
        new SubSequenceDesc(2), new SubSequenceDesc(10, 6, 2) };
    renderer.RenderSequence(seq);
    renderer.WaitForSequenceDone();
}
def onStateChanged(renderer, oldState, newState, instant):
    print('State changed from', oldState, 'to', newState)
    if newState == vray.RENDERER_STATE_IDLE_FRAME_DONE:
        # If the sequence has NOT finished - continue with next frame.
        renderer.continueSequence()

with vray.VRayRenderer() as renderer:
    renderer.setOnStateChanged(onStateChanged)

    renderer.load('./animation.vrscene')

    # When a single number is passed, it's interpreted as "start", with "end" being the last frame and step=1
    sequence = [9, {'start': 4, 'end': 5}, {'start': 1, 'end': 5, 'step': 2}]
    renderer.renderSequence(sequence)
    renderer.waitForSequenceDone()
var renderer = vray.VRayRenderer();

renderer.on('stateChanged', function(oldState, newState, instant) {
    console.log('State changed from ' + oldState + ' to ' + newState);
    if (newState == 'idleFrameDone') {
        // If the sequence has NOT finished - continue with next frame.
        renderer.continueSequence();
    }
});

renderer.load('./animation.vrscene', function (err) {
    if (err) {
        renderer.close();
        throw err;
    }

    // renderSequence() arguments can be in an array but it's not obligatory
    // simple example: renderSequence({ start: 3, end: 8})
    // When a single number is passed, it's interpreted as "start", with "end" being the last frame and step=1
    renderer.renderSequence([{ start: 4, end: 5}, 1, 3, 2,
                             { start: 10, end: 1, step: -2},
                             5,
                             { start: 2, end: 6}]);
    renderer.waitForSequenceDone();
    renderer.close();
});

XIII. V-Ray Server

A V-Ray Server is used as a remote render host during the rendering of a scene. The server is a process that listens for render requests on a specific network port and clients such as instances of the VRayRenderer class can connect to the server to delegate part of the image rendering. V-Ray Standalone render servers can also be used. The server cannot be used on its own to start a rendering process but plays an essential role when completing distributed tasks initiated by clients. In order to take advantage of the distributed rendering capabilities of V-Ray, such servers need to be run and be available for requests.

The V-Ray Application SDK allows for easily creating V-Ray server processes that can be used in distributed rendering. The API includes the VRayServer class that exposes methods for starting and stopping server processes. In addition the VRayServer class enables users to subscribe to V-Ray server specific events so that custom logic can be executed upon their occurrence.

Server events occur when the server starts, when a renderer connects to it, and when a renderer disconnects from it.

#include "vraysrv.hpp"

using namespace VRay;
using namespace std;

void printStarted(VRayServer &server, double instant, void* userData) {
    printf("Server started.\n");
}

void printRendererConnect(VRayServer &server, const char* host, double instant, void* userData) {
    printf("Host %s connected to server.\n", host);
}

void printRendererDisconnect(VRayServer &server, const char* host, double instant, void* userData) {
    printf("Host %s disconnected to server.\n", host);
}

int main() {
    VRayServerInit init(NULL);

    ServerOptions options;
    // Listening port for VRay server. The default value is 20207.
    options.portNumber = 20207;

    // Create an instance of VRayServer with custom options.
    // The server is automatically closed at the end of the current scope.
    VRayServer server(options);

    // Add a listener for the server start event. It is invoked when the server starts listening for connections.
    server.setOnServerStart(printStarted);

    // Add a listener for connect event. It is invoked when a renderer connects to the server.
    server.setOnConnect(printRendererConnect);

    // Add a listener for disconnect event. It is invoked when a renderer disconnects from the server.
    server.setOnDisconnect(printRendererDisconnect);

    // Start listening for connections on the specified port.
    server.run();

    return 0;
}
ServerOptions options = new ServerOptions();

// Listening port for VRay server. The default value is 20207.
options.PortNumber = 20207;

// Create an instance of VRayServer with custom options.
// The server is automatically closed after the `using` block.
using (VRayServer server = new VRayServer(options))
{
    // Add a listener for the server start event. It is invoked when the server starts listening for connections.
    server.Started += new EventHandler((source, e) =>
    {
        Console.WriteLine("Server started. Press Enter to continue...");
    });

    // Add a listener for connect event. It is invoked when a renderer connects to the server.
    server.HostConnected += new EventHandler<HostEventArgs>((source, e) =>
    {
        Console.WriteLine(String.Format("Host {0} connected to server.", e.Host));
    });

    // Add a listener for disconnect event. It is invoked when a renderer disconnects from the server.
    server.HostDisconnected += new EventHandler<HostEventArgs>((source, e) =>
    {
        Console.WriteLine(String.Format("Host {0} disconnected to server.", e.Host));
    });

    // Start listening for connections on the specified port.
    server.Run();
}
import vray

def printStarted(server, instant):
    print('Server started. Press Enter to continue...')

def printRendererConnect(server, host, instant):
    print('Host {0} connected to server.'.format(host))

def printRendererDisconnect(server, host, instant):
    print('Host {0} disconnected from server.'.format(host))


# Listening port for VRay server. The default value is 20207
options = { 'portNumber': 20207 }

# Create an instance of VRayServer with custom options.
# The server is automatically closed after the `with` block.
with vray.VRayServer(**options) as server:
    # Add a listener for the server start event. It is invoked when the server starts listening for connections.
    server.setOnStart(printStarted)

    # Add a listener for connect event. It is invoked when a renderer connects to the server.
    server.setOnConnect(printRendererConnect)

    # Add a listener for disconnect event. It is invoked when a renderer disconnects from the server.
    server.setOnDisconnect(printRendererDisconnect)

    # Start listening for connections on the specified port.
    server.run()
// Listening port for VRay server. The default value is 20207.
var options = { portNumber: 20207 };

// Create an instance of VRayServer with custom options.
var server = vray.VRayServer(options);

// Add a listener for the server start event. It is invoked when the server starts listening for connections.
server.on('start', function(instant) {
    console.log('Server is ready');
    console.log('Press any key to exit');
});

// Add a listener for connect event. It is invoked when a renderer connects to the server.
server.on('connect', function(host, instant) {
    console.log('Host ' + host + ' connected to server.');
});

// Add a listener for disconnect event. It is invoked when a renderer disconnects from the server.
server.on('disconnect', function(host, instant) {
    console.log('Host ' + host + ' disconnected from server.');
});

// Start listening for connections on the specified port.
server.start();

XIV. Distributed Rendering

The V-Ray Application SDK allows third-party integrators to make full use of the V-Ray distributed rendering engine. Distributed rendering lets you render parts of the image in parallel on multiple render hosts. The client process, which initiates the rendering, synchronizes the results produced by the render servers (also known as render slaves). All render modes support distributed rendering.

Render hosts can be added and removed dynamically at any point during the rendering process. A render host should already have a V-Ray server running when the render client adds it to its list of active hosts, because the client verifies at that moment whether a connection can be established for a distributed task.

Each render host is identified by an address:port pair specifying the IP or DNS address of the machine running the V-Ray server and the port on which that server listens for render requests. The VRayRenderer class exposes methods for dynamically adding and removing hosts. Regardless of the progress made by the main render process on the client, remote machines can join the rendering at any time, as long as they have V-Ray servers running.

Removing render hosts is just as easy as adding them. Even if all remote render hosts disconnect or are removed from the host list, the rendering process continues on the client machine that initiated it.
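
Note that the host-management methods take all hosts in a single string rather than a list. The short Python sketch below only illustrates how such a string can be built before calling addHosts(); the addresses are hypothetical, and the complete add/remove workflow in each language is shown in the full examples that follow.

import vray

with vray.VRayRenderer() as renderer:
    renderer.load('cornell_new.vrscene')
    renderer.setDREnabled(True)
    renderer.startSync()

    # Hosts are passed in one string of "address:port" entries separated by semicolons;
    # when the port is omitted it defaults to 20207. The addresses here are hypothetical.
    hosts = ['10.0.0.132:20207', '10.0.0.132:20208', 'render-node-03']
    renderer.addHosts(';'.join(hosts))

    renderer.waitForRenderEnd(60000)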

void printDRStatus(VRayRenderer &renderer) {
    printf("Active hosts: %s\n", renderer.getActiveHosts().c_str());
    printf("Inactive hosts: %s\n", renderer.getInactiveHosts().c_str());
    printf("All hosts: %s\n", renderer.getAllHosts().c_str());
}

int main() {
    VRayInit init(NULL, true);
    VRayRenderer renderer;

    // Load scene from a file.
    renderer.load("cornell_new.vrscene");
    // We have to explicitly enable DR first
    renderer.setDREnabled(true);

    // Start rendering.
    renderer.startSync();
    // Attach several hosts after a few seconds
    renderer.waitForRenderEnd(6000);

    // If port is omitted it defaults to 20207.
    // NOTE: Replace this address with the address of a host running VRayServer.
    renderer.addHosts("10.0.0.132:20207;10.0.0.132:20208");

    printf("=== After adding hosts ===\n");
    printDRStatus(renderer);

    // Wait until rendering has finished or 6s have elapsed.
    renderer.waitForRenderEnd(6000);

    // Remove a remote host from distributed rendering.
    // NOTE: Replace this address with the address of a host already added for distributed rendering.
    renderer.removeHosts("10.0.0.132:20208");

    printf("=== After removing a host ===\n");
    printDRStatus(renderer);

    return 0;
}
static void PrintDRStatus(VRayRenderer renderer)
{
    Console.WriteLine("Active hosts: {0}", renderer.ActiveHosts);
    Console.WriteLine("Inactive hosts: {0}", renderer.InactiveHosts);
    Console.WriteLine("All hosts: {0}", renderer.AllHosts);
}

static void Main(string[] args)
{
    using (VRayRenderer renderer = new VRayRenderer())
    {
        // Load scene from a file.
        renderer.Load("cornell_new.vrscene");
        // We have to explicitly enable DR first
        renderer.SetDREnabled(true);

        // Start rendering.
        renderer.StartSync();
        // Attach several hosts after a few seconds
        renderer.WaitForRenderEnd(6000);

        // If port is omitted it defaults to 20207.
        // NOTE: Replace this address with the address of a host running VRayServer.
        renderer.AddHosts("10.0.0.132:20207;10.0.0.132:20208");

        Console.WriteLine("=== After adding hosts ===");
        PrintDRStatus(renderer);

        // Wait until rendering has finished or 6s have elapsed.
        renderer.WaitForRenderEnd(6000);
        // Remove a remote host from distributed rendering.
        // NOTE: Replace this address with the address of a host already added for distributed rendering.
        renderer.RemoveHosts("10.0.0.132:20208");

        Console.WriteLine("=== After removing a host ===");
        PrintDRStatus(renderer);
    }
}
def printDRStatus(renderer):
    print('Active hosts: ', renderer.getActiveHosts())
    print('Inactive hosts: ', renderer.getInactiveHosts())
    print('All hosts: ', renderer.getAllHosts())

with vray.VRayRenderer() as renderer:
    # Load scene from a file.
    renderer.load('cornell_new.vrscene')
    # We have to explicitly enable DR first
    renderer.setDREnabled(True)

    # Start rendering.
    renderer.startSync()
    # Attach several hosts after a few seconds
    renderer.waitForRenderEnd(6000)

    # If port is omitted it defaults to 20207.
    # NOTE: Replace this address with the address of a host running VRayServer.
    renderer.addHosts('10.0.0.132:20207;10.0.0.132:20208')

    print('=== After adding hosts ===')
    printDRStatus(renderer)

    # Wait until rendering has finished or 6s have elapsed.
    renderer.waitForRenderEnd(6000)

    # Remove a remote host from distributed rendering.
    # NOTE: Replace this address with the address of a host already added for distributed rendering.
    renderer.removeHosts('10.0.0.132:20208')

    print('=== After removing a host ===')
    printDRStatus(renderer)
function printDRStatus(renderer) {
    console.log('Active hosts: ', renderer.getActiveHostsSync());
    console.log('Inactive hosts: ', renderer.getInactiveHostsSync());
    console.log('All hosts: ', renderer.getAllHostsSync());
}

var renderer = vray.VRayRenderer();

// Load scene from a file asynchronously.
renderer.load('cornell_new.vrscene', function(err) {
    if (err) throw err;
    // We have to explicitly enable DR first
    renderer.setDREnabled(true);

    // Start rendering.
    renderer.start(function(err) {
        if (err) throw err;
        // Attach several hosts after a few seconds
        renderer.waitForRenderEnd(6000, function() {
            // If port is omitted it defaults to 20207.
            // NOTE: Replace this address with the address of a host running VRayServer.
            renderer.addHosts('10.0.0.132:20207;10.0.0.132:20208', function() {
                console.log('=== After adding hosts ===');
                printDRStatus(renderer);

                // Wait until rendering has finished or 6s have elapsed.
                renderer.waitForRenderEnd(6000, function() {
                    // Remove a remote host from distributed rendering.
                    // NOTE: Replace this address with the address of a host already added for distributed rendering.
                    renderer.removeHosts('10.0.0.132:20208', function() {
                        console.log('=== After removing a host ===');
                        printDRStatus(renderer);
                        renderer.close();
                    });
                });
            });
        });
    });
});