Lecture series 1
by Prof. Pier Luigi Dragotti, Imperial College London, UK.
Part 1: Structure of multi-view images
1.1 Notion of Plenoptic Function (PF): image-based rendering vs model-based rendering, the plenoptic function, the 7 dimensions of the plenoptic function.
1.2 EPI (epipolar-plane image) and lightfield parameterization, slope of the EPI line to infer occlusion ordering
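The occlusion-ordering cue of 1.2 admits a one-line sketch. Assuming a pinhole camera of focal length f translating along a baseline t (my notation, not the lecture's), a scene point at depth z projects to v = f (x - t) / z, so its EPI line has slope dv/dt = -f / z: nearer points trace steeper lines and occlude flatter ones.

```python
# Sketch of EPI line slope vs. depth (hypothetical parameters and notation,
# not taken from the lecture): a point at depth z, seen by a camera translating
# along t, projects to v = f * (x - t) / z, so the EPI line slope is dv/dt = -f / z.

def epi_slope(f, z):
    """Slope dv/dt of the EPI line traced by a point at depth z (focal length f)."""
    return -f / z

near = epi_slope(f=1.0, z=2.0)    # close point
far = epi_slope(f=1.0, z=10.0)    # distant point

# Larger-magnitude slope => closer point => it occludes flatter EPI lines.
assert abs(near) > abs(far)
```

Sorting EPI lines by slope magnitude therefore yields a front-to-back ordering without explicit depth estimation.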
Part 2: Spectral Analysis of the PF
2.1 Spectrum of PF: proof that the PF spectrum is approximately bandlimited, exact characterization of the spectrum of the PF of a slanted plane
2.2 Uniform Sampling of PF: Shannon-type derivation of the camera spacing required to reconstruct the PF from a finite number of images.
2.3 Adaptive Sampling of PF: strategies to optimize the location of the cameras according to the local complexity of the scene.
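To make 2.2 concrete, here is a Nyquist-style back-of-the-envelope version of the camera-spacing bound. The constants and parameterization below are simplified assumptions of mine, not the exact formula derived in the lecture; the point is the structure of the bound: the admissible spacing shrinks as the depth spread (1/z_min - 1/z_max) grows.

```python
# Illustrative camera-spacing bound for plenoptic sampling (simplified constants,
# not the lecture's exact derivation). Intuition: the EPI spectrum of a scene with
# depths in [z_min, z_max] is confined to a bow-tie between the lines
# Omega_t = f*Omega_v/z_min and Omega_t = f*Omega_v/z_max, so the camera spacing
# must keep the spectral replicas along Omega_t from overlapping.

def max_camera_spacing(f, pixel_pitch, z_min, z_max):
    """Largest camera spacing that avoids aliasing, taking the highest image
    frequency to be Omega_v = pi / pixel_pitch (an assumption of this sketch)."""
    disparity_spread = f * (1.0 / z_min - 1.0 / z_max)  # per unit camera motion
    return 2.0 * pixel_pitch / disparity_spread

# A deeper scene (larger 1/z_min - 1/z_max) tolerates less camera spacing:
shallow = max_camera_spacing(f=0.05, pixel_pitch=1e-5, z_min=5.0, z_max=6.0)
deep = max_camera_spacing(f=0.05, pixel_pitch=1e-5, z_min=1.0, z_max=100.0)
assert deep < shallow
```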
Part 3: Layer-based representation of PF: From plenoptic sampling to layer-based representations, layer-extraction algorithms
3.1 Layer-based Representation for IBR: view synthesis using layer-based representations, numerical results
3.2 Layer-based Representation for enhancement: denoising of multi-view images using layers
Part 4: Compression
4.1 Basics: lossless vs lossy compression, quantization, bit allocation, transform coding
4.2 Overview of Image Compression: wavelets vs Discrete Cosine Transform (DCT)
4.3 Compression of the Lightfield: view compensated multi-dimensional wavelet transform, shape compression, bit allocation between textures and shapes, numerical results.
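The quantization step listed in 4.1 is the one non-invertible stage of a lossy coder; a textbook mid-tread uniform quantizer (a generic illustration, not the lecture's code) looks like this:

```python
# Minimal mid-tread uniform scalar quantizer, illustrating the quantization
# stage of lossy coding (generic textbook construction, not from the lecture).

def quantize(x, step):
    """Map x to the index of its quantization bin (round to nearest level)."""
    return round(x / step)

def dequantize(index, step):
    """Reconstruct the representative value (bin centre) from the index."""
    return index * step

step = 0.5
x = 1.3
xhat = dequantize(quantize(x, step), step)

# The reconstruction error of a uniform quantizer is bounded by half the step:
assert abs(x - xhat) <= step / 2
```

Halving the step roughly costs one extra bit per sample, which is where the bit-allocation trade-off between rate and distortion comes from.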
Lecture series 2
by Prof. Manuel Martinez Corral, University of Valencia, Spain.
Part 1: Capture stage in Integral Photography
Influence of geometrical parameters.
Influence of diffractive parameters.
Resolution, depth of field and viewing angle.
Part 2: Display stage in Integral Photography
Depth-priority integral photography (DPIP) versus resolution-priority integral photography (RPIP).
Pseudoscopic to orthoscopic algorithms.
Facet braiding and other effects.
Part 3: From Plenoptic Photography to Integral Photography
Algorithms for display.
Algorithms for reconstruction.
Resolution of reconstructed images.
Part 4: Design of Capture Rig and Display Monitor
Lecture series 3
by Dr. Christian Perwass, Raytrix, Germany.
In the first lecture you will see an introduction to the multi-focus plenoptic camera with an overview of its working principle. I will also present numerous examples from application areas as diverse as microscopy, photography, optical inspection, volumetric velocimetry, plant inspection and gesture recognition.
The second lecture discusses many aspects of the multi-focus plenoptic camera in more detail. At this point you will finally see some formulas. After this lecture you should be able to understand the design constraints of a plenoptic camera and how it should be designed for particular applications.
You will get some hands-on experience with Raytrix plenoptic cameras. We will also work through some mathematical derivations and may find time for some simple programming around a basic plenoptic rendering algorithm.
Lecture series 4
by Prof. Andrew Lumsdaine, Indiana University, Bloomington, USA.
Part 1: Background/Review
We recapitulate the basics of plenoptic capture and representation and lay the foundation for subsequent lectures on Lytro, GPU computing, and Fourier slice refocusing. Using the phase-space representation of the plenoptic function, we compare and contrast plenoptic 1.0 and plenoptic 2.0, particularly with respect to resolution. Multimode capture capabilities of plenoptic cameras are described, including HDR, polarization, and multispectral capture.
Part 2: Reverse engineering the Lytro camera
The Lytro camera was released in October 2011 as the first consumer-oriented plenoptic camera. In this lecture we take an in-depth look at the design and capabilities of the Lytro camera, its software, and its file formats. We develop some basic approaches for reading and manipulating Lytro data.
Part 3: GPU computing with plenoptic data
We review basic computational approaches for manipulating and rendering plenoptic data, including focusing and refocusing, changing points of view, and changing depth of field. We develop GPU-based implementations of these computations using OpenGL and the OpenGL Shading Language (GLSL). Students will have the opportunity to develop and explore their own algorithms during the exercise session.
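The focusing/refocusing computation that Part 3 maps onto the GPU can be prototyped in a few lines of NumPy. The array layout and the integer-shift simplification below are my assumptions, not the lecture's code; a real renderer would interpolate sub-pixel shifts in a shader.

```python
# CPU sketch of shift-and-add refocusing (pure NumPy; the (S, T, H, W) layout
# and integer pixel shifts are simplifying assumptions for illustration).
import numpy as np

def refocus(lightfield, slope):
    """Average sub-aperture views after shifting each by slope * aperture offset.

    lightfield: array of shape (S, T, H, W) -- an S x T grid of H x W views.
    slope: disparity per unit aperture offset; varying it moves the focal plane.
    """
    S, T, H, W = lightfield.shape
    out = np.zeros((H, W))
    for s in range(S):
        for t in range(T):
            dy = int(round(slope * (s - S // 2)))
            dx = int(round(slope * (t - T // 2)))
            out += np.roll(lightfield[s, t], (dy, dx), axis=(0, 1))
    return out / (S * T)

# With slope = 0 the views are averaged unshifted, so a constant light field
# refocuses to the same constant image:
lf = np.ones((3, 3, 8, 8))
assert np.allclose(refocus(lf, 0.0), 1.0)
```

Changing the point of view amounts to selecting (or interpolating) a subset of the (s, t) views instead of averaging them all, which is why the same GPU data layout serves both operations.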
Part 4: Fourier slice refocusing
The original Fourier-slice refocusing algorithm of Ren Ng is developed. We begin by characterizing plenoptic imagery and deriving basic optical transformations in the frequency domain. We briefly show how different cameras can be interpreted in the frequency domain, including heterodyne cameras. Using the frequency-domain representation of the plenoptic function, we relate rendering in the spatial domain to slicing in the frequency domain and show how slicing at different angles corresponds to focusing at different planes. We conclude by extending Ng’s original algorithm to the case of the focused plenoptic camera.
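The slicing-equals-refocusing relationship described above is Ng's Fourier Slice Photography theorem; stated compactly (notation mine, up to normalization constants):

```latex
% Fourier Slice Photography theorem (Ng, 2005), up to normalization:
% a photograph refocused via rescaling factor \alpha is an inverse 2-D Fourier
% transform of a 2-D slice of the 4-D light field spectrum.
\[
  \mathcal{P}_\alpha \;=\; \mathcal{F}^{-2} \circ \mathcal{S}_\alpha \circ \mathcal{F}^{4},
  \qquad
  \bigl(\mathcal{S}_\alpha G\bigr)(k_x, k_y)
  \;=\; G\bigl(\alpha k_x,\; \alpha k_y,\; (1-\alpha)k_x,\; (1-\alpha)k_y\bigr),
\]
```

where \(\mathcal{P}_\alpha\) renders a photograph focused at the plane set by \(\alpha\), \(\mathcal{F}^{4}\) is the 4-D Fourier transform of the light field, and \(\mathcal{S}_\alpha\) is the slicing operator; changing \(\alpha\) tilts the slice, i.e. moves the focal plane.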
In this exercise session students will gain in-depth experience in using the phase space representation of the plenoptic function as a fundamental tool for analysis and computation. Students will also have the opportunity to develop GPU-based software solutions for working with real plenoptic camera data.
Lecture series 5 – Light Field Displaying
by Dr. Tibor Balogh, Holografika Ltd., Hungary.
Part 1: Advanced 3D displays beyond stereo, overview
Attendees will gain insight into next-generation 3D display technologies beyond mainstream stereo/autostereo systems, such as volumetric, holographic and light field displays. They will get an in-depth understanding of the basic principles of 3D displaying and of the optical roadblocks of known solutions, which are not always disclosed.
Part 2: Light Field Displaying and the HoloVizio System
The HoloVizio system is the very first realization of real 3D light field displaying, capable of reconstructing natural, true 3D viewing. We will explain how light field displaying can provide a perfect 3D impression, outperforming other approaches; the idea behind the HoloVizio displays; and various display configurations, including world firsts such as the large-scale Cinema System and the Reality Series monitors, giving an unlimited full-angle 3D experience.
Part 3: The HoloVizio Software System & Applications
We explain the software environment for generating 3D images on light field displays, where the main objective is to create a compatible platform interfacing with existing model formats and 3D applications. An outlook on dedicated 3D light field display applications and industry trends will be presented.
Part 4: Light Field Content
While computer-generated 3D content is a well-understood field, live true-3D shoots raise several issues that remain to be solved. The session will provide details on various methods for the live capture, representation and transmission of 3D light field data. Research results will be presented on sampling, reducing the number of cameras, formats such as MVD4, interpolation, heavy extrapolation, and the requirements for a future light field format with compression.
In the exercise session students will have the chance to generate 3D images on their own on a small laboratory-model HoloVizio display. They can download 3D models from public websites and transfer them immediately onto the light field display. They can also run 3D OpenGL applications directly on the screen and, building on Part 3, follow the whole process of rendering a 3D scene in 3ds Max, converting it into the light field format and showing it on the screen.