Perspective

    There’s a lot of mystification around the painted image. Many people have admitted they couldn’t make great works of art because they weren’t born with the natural skill to render the natural world. Every time we attribute great work in the arts to this mysterious genius-quotient, we overlook how art is created by means of technology, owing to more than simply the hands of a cinquecento master.

    Paul Graham is particularly guilty of this in his essay “Hackers and Painters”. While attempting to praise the work of Leonardo da Vinci, he condescendingly puts emphasis on the painstaking work of painting every leaf in a bush. In fact, if we go along with Graham’s comparison, almost anything could be substituted for painting: a hero figure works hard and paves the way for more people to do what he does. The argument is so unspecific that there’s no reason to care about it. By allowing his audience to be mystified at da Vinci’s genius, Graham avoids discussing what exactly makes oil painting significant.

    There is a non-trivial relationship between Western oil painting and computing, and its love-child is 3D graphics. Let’s talk a bit about painting. In the 15th century, Filippo Brunelleschi sketched the Florentine skyline on a mirror. By extending the lines of the buildings, he discovered that they were in fact perspectival orthogonals converging on a vanishing point. This discovery began the widespread use of perspective on the Italian peninsula and, eventually, in the rest of Europe. Others had guessed at perspective before him, and some simply weren’t interested. Brunelleschi’s discovery was significant, however, because it revealed the geometric relationship between things in space and our perception of them. Whether the resulting image was beautiful or devotional, the method served to mimic perception rather than encode objects’ relationships symbolically.

    Brunelleschi was not gifted with the ability to perfectly draw a skyline. Optical devices allowed him to separate the world he perceived from its planar representation. Rather than having to paint everything outside his window, he could isolate that image on a flat surface. This is technology used interestingly. This is a hack. In 2002, artist David Hockney and physicist Charles M. Falco published a treatise arguing that the photographic realism achieved by Renaissance art is the result of optical technology: camera obscuras, camera lucidas, curved mirrors. Furthermore, artists like Giotto were calculating their orthogonals using rudimentary algebra.

    Painters used technology to discover the transformation of three-dimensional space into a two-dimensional image as perceived from a position defined within that space. Computer graphics relies on a very similar understanding of perspective to project a scene. A projection is a flattening of three-dimensional coordinates so that they can be presented on a two-dimensional surface. When we play video games or work in CAD, we are watching the graphics engine render the scene by multiplying its 3D coordinates through a matrix that turns them into 2D coordinates, frame after frame. Oil painting happens at 50 frames per second.
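    To make that concrete, here is a minimal sketch of the projection step in Three.js. The camera, the point, and the canvas size are invented for illustration, and the snippet assumes a build of the library recent enough that Vector3 has a project method and updateMatrixWorld refreshes the camera’s inverse matrix.

var camera = new THREE.PerspectiveCamera( 45, 16 / 9, 0.1, 1000 ); // fov, aspect ratio, near, far
camera.position.set( 0, 2, 10 );
camera.updateMatrixWorld(); // the renderer normally keeps these matrices current each frame

var point = new THREE.Vector3( 1, 1, 0 );        // a vertex somewhere in the scene
var projected = point.clone().project( camera ); // now in normalized device coordinates, each axis in [-1, 1]

// Map the normalized coordinates onto the pixels of, say, a 1920x1080 canvas.
var x = ( projected.x + 1 ) / 2 * 1920;
var y = ( 1 - projected.y ) / 2 * 1080;

    Everything interesting happens inside project; the last two lines only convert the normalized square into the pixels of a particular screen.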

    In order for us to program this into the computer, we have to define the operation mathematically so that it can be repeated for every coordinate, every frame. This is done through matrix multiplication. The graphic below demonstrates how the transformation matrix can be divided up into different operations on coordinates, each following the same rules.
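    One version of that decomposition can be written out in Three.js. The particular translation, rotation, and scale here are arbitrary choices of mine; the point is that each operation is its own 4×4 matrix, and multiplying them together yields a single matrix that performs all of them at once.

var translate = new THREE.Matrix4().makeTranslation( 0, 0, -5 );  // push the object into view
var rotate    = new THREE.Matrix4().makeRotationY( Math.PI / 4 ); // turn it 45 degrees
var scale     = new THREE.Matrix4().makeScale( 2, 2, 2 );         // double its size

// Order matters: the matrix written last is the first to act on the coordinate.
var transform = new THREE.Matrix4().multiplyMatrices( translate, rotate ).multiply( scale );

// One multiplication now performs all three operations on any coordinate in the scene.
var vertex = new THREE.Vector3( 1, 0, 0 ).applyMatrix4( transform );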

    I’ve begun to think a lot more about perspective since working with WebGL through Three.js. I ran into the problem of creating a ray that would intersect with objects on a field, allowing me to select those objects and move them with my cursor. This is a somewhat complex operation that the Three.js library simplifies considerably.

    When we click a screen, our computer doesn’t intuit where in three-dimensional space that click lands. The click only corresponds to x- and y-coordinates. Those coordinates have to be extended into space using a raycaster.

gallery.js

var raycaster = new THREE.Raycaster(camera.position, direction); // origin, plus a normalized direction vector built from the mouse click

Three.js

THREE.Raycaster = function ( origin, direction, near, far ) {

        this.ray = new THREE.Ray( origin, direction );
        // direction is assumed to be normalized (for accurate distance calculations)
        this.near = near || 0;
        this.far = far || Infinity;

        this.params = {
                Sprite: {},
                Mesh: {},
                PointCloud: { threshold: 1 },
                LOD: {},
        };
};

    This raycaster object is set with the coordinates of the camera and a three-dimensional vector constructed from the mouse click. The click’s z-coordinate is set to 1, a depth on the camera’s frustum, so the flat click is treated as a point sitting on a plane inside the 3D environment. When we ask the raycaster what it intersects with in the scene, it runs through the objects and checks whether this ray passes through their coordinates (for a point cloud, within the small threshold sphere listed in the params above). This is an extremely basic explanation of hundreds of lines of code. Nonetheless, the logic of it can be understood. Three.js gets around the problem of unprojecting a vector by drawing a three-dimensional line between two points.
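    Put together, the picking step looks roughly like this. It is a sketch rather than my gallery code: the handler name is invented, and it assumes a Three.js build from the same era as the constructor above, where a Vector3 can unproject itself against a camera.

function onClick( event, camera, scene ) {

        // 1. Convert the click from pixel coordinates to normalized device coordinates.
        var click = new THREE.Vector3(
                ( event.clientX / window.innerWidth ) * 2 - 1,
                - ( event.clientY / window.innerHeight ) * 2 + 1,
                1 // give the flat click a depth so it can sit on a plane in the scene
        );

        // 2. Unproject: undo the camera's projection so the click becomes a point in world space.
        // (Assumes the renderer has already updated the camera's matrices this frame.)
        click.unproject( camera );

        // 3. The three-dimensional line: a normalized direction from the camera through that point.
        var direction = click.sub( camera.position ).normalize();

        // 4. Cast the ray and collect everything it hits.
        var raycaster = new THREE.Raycaster( camera.position, direction );
        return raycaster.intersectObjects( scene.children );
}

    The two points of that three-dimensional line are the camera’s position on one end and the unprojected click on the other; everything the ray hits in between comes back from intersectObjects, sorted by distance.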