This week Google announced "Seurat", a novel surface light field rendering technology, developed in collaboration with ILMxLab, which would enable "real-time cinema-quality, photorealistic graphics" on mobile VR devices. The technology captures all light rays in a scene by pre-rendering it from many different viewpoints. During runtime, entirely new viewpoints are created by interpolating those viewpoints on-the-fly, resulting in photoreal reflections and lighting in real-time (http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/). At almost the same time, Disney released a paper describing a closely related light field rendering technique.

Light field rendering in itself is not a new technique. It has actually been around for more than 20 years, but has only recently become a viable real-time rendering technique. The first paper, "Light field rendering" by Marc Levoy and Pat Hanrahan, was presented at Siggraph 1996, and the method has since been incrementally improved by others. One of the first movies that showed a practical use for light fields is The Matrix from 1999, where an array of cameras firing at the same time (or in rapid succession) made it possible to pan around an actor to create a super slow-motion effect ("bullet time").

If you're interested in a crash course on light fields, check out Joan Charmant's video tutorial below, which explains the basics of implementing a light field renderer. Instead of attempting to explain the theory behind light fields (for which there are plenty of excellent online sources), the main focus of this post is to show how to quickly get started with rendering a synthetic light field using Blender Cycles and some open-source plug-ins. Two tools are needed:

1) The first is the light field camera grid add-on for Blender, made by Katrin Honauer and Ole Johanssen from Heidelberg University in Germany. This plug-in sets up a camera grid in Blender and renders the scene from each camera using the Cycles path tracing engine. Good results can be obtained with a grid of 17 by 17 cameras and a distance of 10 cm between neighbouring cameras; for high quality, a 33-by-33 camera grid with an inter-camera distance of 5 cm is recommended.

2) The second tool is the light field encoder and WebGL-based light field viewer created by Michal Polko. It takes the images produced by the camera grid add-on and compresses them heavily: for the dragon scene below, the 107 MB batch of 289 rendered images was compressed down to only 3 MB!
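To make the camera-grid idea a bit more concrete, here is a minimal Blender Python sketch of the kind of setup the add-on automates. This is not the add-on's actual code: the grid size, spacing, camera orientation and output path are assumptions you will want to adapt to your own scene.

```python
# Minimal sketch (Blender 2.8+ Python API) of a light field camera grid --
# NOT the Heidelberg add-on's code, just an illustration of the idea:
# place an N x N grid of cameras and render one Cycles view per camera.
import bpy
from math import radians

GRID = 17                   # 17 x 17 cameras, as used for the dragon scene below
SPACING = 0.05              # 5 cm between neighbouring cameras (1 Blender unit = 1 m)
OUTPUT = "/tmp/lightfield"  # hypothetical output folder

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 512
scene.render.resolution_y = 512

half = (GRID - 1) * SPACING / 2.0
for row in range(GRID):
    for col in range(GRID):
        # Cameras lie in the XZ plane at y = 0 and all look along +Y,
        # towards the scene (translate/rotate to match your own layout).
        cam_data = bpy.data.cameras.new(f"lf_cam_{row:02d}_{col:02d}")
        cam_obj = bpy.data.objects.new(cam_data.name, cam_data)
        cam_obj.location = (col * SPACING - half, 0.0, row * SPACING - half)
        cam_obj.rotation_euler = (radians(90), 0.0, 0.0)
        scene.collection.objects.link(cam_obj)

        # Render one 512 x 512 view per grid position with Cycles.
        scene.camera = cam_obj
        scene.render.filepath = f"{OUTPUT}/view_{row:02d}_{col:02d}.png"
        bpy.ops.render.render(write_still=True)
```

The resulting stack of numbered views is exactly the kind of input the encoder/viewer expects: a regular grid of images with known camera spacing.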
The following video demonstrates a light field rendered with Cycles:

I rendered the Stanford dragon using a 17 by 17 camera grid with a distance of 5 cm between adjacent cameras. The light field was created by rendering the scene from 289 (17x17) different camera viewpoints, which took about 6 minutes in total (about 1 to 2 seconds of render time per 512x512 image on a good GPU).

A live online demo of the light field with the dragon can be seen here. You can change the viewpoint (within the limits of the original camera grid) and refocus the image in real-time by clicking on the image. A depth map is also created at the same time and enables on-the-fly refocusing of the image by interpolating information from several images (a rough sketch of this idea is given at the end of this post).

A later tutorial will add a bit more freedom to the camera, allowing for rotation and zooming.

As an aside: combine this with a cube map or photon map and some sort of cube map with multilayered depth buffers, and you could make a kind of 3D version of this; or something like Enlighten's light probes, except with cube maps.
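For readers curious how the refocusing mentioned above can work, here is a rough shift-and-average (synthetic aperture) sketch in NumPy. It is not the code used by the WebGL viewer; the function name, array layout and the integer-pixel shift scheme are simplifying assumptions.

```python
# Toy "shift-and-average" (synthetic aperture) refocus -- not the viewer's
# actual code.  Each grid view is shifted according to its camera offset so
# that points on the chosen focal plane line up, then all views are averaged.
import numpy as np

def refocus(views, offsets, disparity):
    """views:     (N, H, W, 3) array of images from the camera grid
    offsets:   (N, 2) array of (x, y) camera offsets within the grid
    disparity: pixel shift per unit of camera offset; sweeping this value
               moves the virtual focal plane through the scene
    """
    acc = np.zeros(views[0].shape, dtype=np.float64)
    for img, (dx, dy) in zip(views, offsets):
        shift_x = int(round(-dx * disparity))
        shift_y = int(round(-dy * disparity))
        # np.roll wraps around at the image borders; good enough for a toy demo.
        acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
    return (acc / len(views)).astype(views[0].dtype)
```

Sweeping the disparity value moves the focal plane through the scene; a depth map lets a viewer pick the right disparity for the point you click on, which is roughly what the on-the-fly refocusing described above amounts to.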