
The world of 3D graphics can be very intimidating to get into. Whether you just want to create an interactive 3D logo, or design a fully fledged game, if you don't know the principles of 3D rendering, you're stuck using a library that abstracts out a lot of things.

Using a library can be just the right tool, and JavaScript has an amazing open source one in the form of three.js. There are some disadvantages to using pre-made solutions, though:

  • They can have many features that you don't plan to use. The size of the minified base three.js features is around 500kB, and any extra features (loading actual model files is one of them) make the payload even larger. Transferring that much data just to show a spinning logo on your website would be a waste.
  • An extra layer of abstraction can make otherwise easy modifications hard to do. Your creative way of shading an object on the screen can either be straightforward to implement or require tens of hours of work to incorporate into the library's abstractions.
  • While the library is optimized very well in most scenarios, a lot of bells and whistles can be cut out for your use case. The renderer can cause certain procedures to run millions of times on the graphics card. Every instruction removed from such a procedure means that a weaker graphics card can handle your content without problems.

Even if you decide to use a high-level graphics library, having basic knowledge of the things under the hood allows you to use it more effectively. Libraries can also have advanced features, like ShaderMaterial in three.js. Knowing the principles of graphics rendering allows you to use such features.

Illustration of a 3D Toptal logo on a WebGL canvas

Our goal is to give a short introduction to all the key concepts behind rendering 3D graphics and using WebGL to implement them. You will see the most common thing that is done, which is showing and moving 3D objects in an empty space.

The final code is available for you to fork and play around with.

Representing 3D Models

The first thing you need to understand is how 3D models are represented. A model is made of a mesh of triangles. Each triangle is represented by three vertices, one for each corner of the triangle. There are three most common properties attached to vertices.

Vertex Position

Position is the most intuitive property of a vertex. It is the position in 3D space, represented by a 3D vector of coordinates. If you know the exact coordinates of three points in space, you have all the information you need to draw a simple triangle between them. To make models look really good when rendered, there are a couple more things that need to be provided to the renderer.

Vertex Normal

Spheres with the same wireframe, that have flat and smooth shading applied

Consider the two models above. They consist of the same vertex positions, yet look totally different when rendered. How is that possible?

Besides telling the renderer where we want a vertex to be located, we can also give it a hint on how the surface is slanted in that exact position. The hint is in the form of the normal of the surface at that specific point on the model, represented with a 3D vector. The following image should give you a more descriptive look at how that is handled.

Comparison between normals for flat and smooth shading

The left and right surface correspond to the left and right sphere in the previous image, respectively. The red arrows represent normals that are specified for a vertex, while the blue arrows represent the renderer's calculations of how the normal should look for all the points between the vertices. The image shows a demonstration for 2D space, but the same principle applies in 3D.

The normal is a hint for how lights will illuminate the surface. The closer a light ray's direction is to the normal, the brighter the point is. Having gradual changes in the normal direction causes light gradients, while having sharp changes with no changes in-between causes surfaces with constant illumination across them, and sudden changes in illumination between them.
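As a small standalone illustration of that principle (not part of the loader we build later, and with hypothetical function names), the brightness of a point under a directional light is often computed as the dot product between the unit normal and the direction toward the light:

```javascript
// Sketch: Lambertian (diffuse) brightness from a normal and a light direction.
// Both vectors are assumed to be unit length; the names are illustrative only.
function dot (a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

function diffuseBrightness (normal, toLight) {
  // Clamp at zero: surfaces facing away from the light receive no light
  return Math.max(0, dot(normal, toLight))
}

// A surface facing the light directly is fully lit...
console.log(diffuseBrightness([0, 1, 0], [0, 1, 0])) // 1
// ...while one at a grazing angle receives nothing
console.log(diffuseBrightness([0, 1, 0], [1, 0, 0])) // 0
```

Interpolating the normal between vertices (smooth shading) makes this brightness change gradually across a triangle; keeping it constant (flat shading) makes the whole triangle uniformly lit.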

Texture Coordinates

The last significant property is texture coordinates, commonly referred to as UV mapping. You have a model, and a texture that you want to apply to it. The texture has various areas on it, representing images that we want to apply to different parts of the model. There has to be a way to mark which triangle should be represented with which part of the texture. That's where texture mapping comes in.

For each vertex, we mark two coordinates, U and V. These coordinates represent a position on the texture, with U representing the horizontal axis, and V the vertical axis. The values aren't in pixels, but a percentage position within the image. The bottom-left corner of the image is represented with two zeros, while the top-right is represented with two ones.
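To make that convention concrete, here is a hypothetical helper (not part of the later code) that converts UV coordinates into pixel coordinates for a texture of a given size, assuming, as above, that V runs from the bottom of the image up:

```javascript
// Sketch: map UV coordinates (0..1) to pixel coordinates on a texture.
// Images are usually indexed from the top-left corner, so V is flipped.
function uvToPixel (u, v, textureWidth, textureHeight) {
  return {
    x: u * (textureWidth - 1),
    y: (1 - v) * (textureHeight - 1)
  }
}

// UV (0, 0) is the bottom-left corner of the image...
console.log(uvToPixel(0, 0, 256, 256)) // { x: 0, y: 255 }
// ...and UV (1, 1) is the top-right
console.log(uvToPixel(1, 1, 256, 256)) // { x: 255, y: 0 }
```

This flip is also why the OBJ loader later in the article stores `1 - v` when reading texture coordinates.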

A triangle is painted by taking the UV coordinates of each vertex in the triangle, and applying the image that is captured between those coordinates on the texture.

Demonstration of UV mapping, with one patch highlighted, and seams visible on the model

You can see a demonstration of UV mapping in the image above. The spherical model was taken, and cut into parts that are small enough to be flattened onto a 2D surface. The seams where the cuts were made are marked with thicker lines. One of the patches has been highlighted, so you can nicely see how things match. You can also see how a seam through the middle of the smile places parts of the mouth into two different patches.

The wireframes aren't part of the texture, but are simply overlaid over the image so you can see how things map together.

Loading an OBJ Model

Believe it or not, this is all you need to know to create your own simple model loader. The OBJ file format is simple enough to implement a parser in a few lines of code.

The file lists vertex positions in a v <float> <float> <float> format, with an optional fourth float, which we will ignore to keep things simple. Vertex normals are represented similarly with vn <float> <float> <float>. Finally, texture coordinates are represented with vt <float> <float>, with an optional third float which we shall ignore. In all three cases, the floats represent the corresponding coordinates. These three properties are accumulated in three arrays.

Faces are represented with groups of vertices. Each vertex is represented with the index of each of its properties, whereby indices start at 1. There are various ways this is represented, but we will stick to the f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3 format, requiring all three properties to be provided, and limiting the number of vertices per face to three. All of these limitations are being done to keep the loader as simple as possible, since all other options require some extra processing before they are in a format that WebGL likes.

We've put in a lot of requirements for our file loader. That may sound limiting, but 3D modeling applications tend to give you the ability to set those limitations when exporting a model as an OBJ file.

The following code parses a string representing an OBJ file, and creates a model in the form of an array of faces.

function Geometry (faces) {
  this.faces = faces || []
}

// Parses an OBJ file, passed as a string
Geometry.parseOBJ = function (src) {
  var POSITION = /^v\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var NORMAL = /^vn\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var UV = /^vt\s+([\d\.\+\-eE]+)\s+([\d\.\+\-eE]+)/
  var FACE = /^f\s+(-?\d+)\/(-?\d+)\/(-?\d+)\s+(-?\d+)\/(-?\d+)\/(-?\d+)\s+(-?\d+)\/(-?\d+)\/(-?\d+)(?:\s+(-?\d+)\/(-?\d+)\/(-?\d+))?/

  var lines = src.split('\n')
  var positions = []
  var uvs = []
  var normals = []
  var faces = []
  lines.forEach(function (line) {
    // Match each line of the file against various RegEx-es
    var result
    if ((result = POSITION.exec(line)) != null) {
      // Add new vertex position
      positions.push(new Vector3(parseFloat(result[1]), parseFloat(result[2]), parseFloat(result[3])))
    } else if ((result = NORMAL.exec(line)) != null) {
      // Add new vertex normal
      normals.push(new Vector3(parseFloat(result[1]), parseFloat(result[2]), parseFloat(result[3])))
    } else if ((result = UV.exec(line)) != null) {
      // Add new texture mapping point
      uvs.push(new Vector2(parseFloat(result[1]), 1 - parseFloat(result[2])))
    } else if ((result = FACE.exec(line)) != null) {
      // Add new face
      var vertices = []
      // Create three vertices from the passed one-indexed indices
      for (var i = 1; i < 10; i += 3) {
        var part = result.slice(i, i + 3)
        var position = positions[parseInt(part[0]) - 1]
        var uv = uvs[parseInt(part[1]) - 1]
        var normal = normals[parseInt(part[2]) - 1]
        vertices.push(new Vertex(position, normal, uv))
      }
      faces.push(new Face(vertices))
    }
  })

  return new Geometry(faces)
}

// Loads an OBJ file from the given URL, and returns it as a promise
Geometry.loadOBJ = function (url) {
  return new Promise(function (resolve) {
    var xhr = new XMLHttpRequest()
    xhr.onreadystatechange = function () {
      if (xhr.readyState == XMLHttpRequest.DONE) {
        resolve(Geometry.parseOBJ(xhr.responseText))
      }
    }
    xhr.open('GET', url, true)
    xhr.send(null)
  })
}

function Face (vertices) {
  this.vertices = vertices || []
}

function Vertex (position, normal, uv) {
  this.position = position || new Vector3()
  this.normal = normal || new Vector3()
  this.uv = uv || new Vector2()
}

function Vector3 (x, y, z) {
  this.x = Number(x) || 0
  this.y = Number(y) || 0
  this.z = Number(z) || 0
}

function Vector2 (x, y) {
  this.x = Number(x) || 0
  this.y = Number(y) || 0
}

The Geometry structure holds the exact data needed to send a model to the graphics card to process. Before you do that though, you'd probably want the ability to move the model around on the screen.

Performing Spatial Transformations

All the points in the model we loaded are relative to its coordinate system. If we want to translate, rotate, and scale the model, all we need to do is perform that operation on its coordinate system. Coordinate system A, relative to coordinate system B, is defined by the position of its center as a vector p_ab, and the vector for each of its axes, x_ab, y_ab, and z_ab, representing the direction of that axis. So if a point moves by 10 on the x axis of coordinate system A, then—in the coordinate system B—it will move in the direction of x_ab, multiplied by 10.

All of this information is stored in the following matrix form:

x_ab.x  y_ab.x  z_ab.x  p_ab.x
x_ab.y  y_ab.y  z_ab.y  p_ab.y
x_ab.z  y_ab.z  z_ab.z  p_ab.z
     0       0       0       1

If we want to transform the 3D vector q, we just have to multiply the transformation matrix with the vector:

q.x
q.y
q.z
  1

This causes the point to move by q.x along the new x axis, by q.y along the new y axis, and by q.z along the new z axis. Finally, it causes the point to additionally move by the p vector, which is the reason why we use a one as the final element of the multiplication.

The big advantage of using these matrices is the fact that if we have multiple transformations to perform on the vertex, we can merge them into one transformation by multiplying their matrices, prior to transforming the vertex itself.
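The matrix-vector product described above can be sketched in a few lines. This is a standalone illustration, independent of the Transformation class implemented later, and it stores the matrix as an array of rows purely for readability:

```javascript
// Sketch: apply a 4x4 transformation matrix (stored as an array of rows)
// to a 3D point, using 1 as the fourth element so translation takes effect.
function transformPoint (m, p) {
  var q = [p[0], p[1], p[2], 1]
  return [0, 1, 2].map(function (row) {
    return m[row][0] * q[0] + m[row][1] * q[1] + m[row][2] * q[2] + m[row][3] * q[3]
  })
}

// A pure translation by (10, 0, -5)
var translate = [
  [1, 0, 0, 10],
  [0, 1, 0, 0],
  [0, 0, 1, -5],
  [0, 0, 0, 1]
]

console.log(transformPoint(translate, [1, 2, 3])) // [ 11, 2, -2 ]
```

Chaining transformations then amounts to multiplying the matrices together once, and applying the combined matrix to every vertex.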

There are various transformations that can be performed, and we'll take a look at the key ones.

No Transformation

If no transformations happen, then the p vector is a zero vector, the x vector is [1, 0, 0], y is [0, 1, 0], and z is [0, 0, 1]. From now on we'll refer to these values as the default values for these vectors. Applying these values gives us an identity matrix:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

This is a good starting point for chaining transformations.

Translation

Frame transformation for translation

When we perform translation, all the vectors except for the p vector have their default values. This results in the following matrix:

1 0 0 p.x
0 1 0 p.y
0 0 1 p.z
0 0 0   1

Scaling

Frame transformation for scaling

Scaling a model means changing the amount that each coordinate contributes to the position of a point. There is no uniform offset caused by scaling, so the p vector keeps its default value. The default axis vectors should be multiplied by their respective scaling factors, which results in the following matrix:

s_x   0   0 0
  0 s_y   0 0
  0   0 s_z 0
  0   0   0 1

Here s_x, s_y, and s_z represent the scaling applied to each axis.

Rotation

Frame transformation for rotation around the Z axis

The image above shows what happens when we rotate the coordinate frame around the Z axis.

Rotation results in no uniform offset, so the p vector keeps its default value. Now things get a bit trickier. Rotations cause movement along a certain axis in the original coordinate system to move in a different direction. So if we rotate a coordinate system by 45 degrees around the Z axis, moving along the x axis of the original coordinate system causes movement in a diagonal direction between the x and y axis in the new coordinate system.
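The 45-degree example can be verified numerically. This standalone sketch applies the Z-rotation formula to a point on the x axis (only the x and y components matter, since rotation around Z leaves z unchanged):

```javascript
// Sketch: rotate the x/y part of a point around the Z axis by phi radians.
function rotateZPoint (phi, x, y) {
  return {
    x: Math.cos(phi) * x - Math.sin(phi) * y,
    y: Math.sin(phi) * x + Math.cos(phi) * y
  }
}

// Rotating (1, 0) by 45 degrees lands on the diagonal between the x and y axes
var p = rotateZPoint(Math.PI / 4, 1, 0)
console.log(p.x.toFixed(3), p.y.toFixed(3)) // 0.707 0.707
```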

To keep things simple, we'll just show you how the transformation matrices look for rotations around the main axes.

Around X:
1         0         0 0
0  cos(phi)  sin(phi) 0
0 -sin(phi)  cos(phi) 0
0         0         0 1

Around Y:
 cos(phi)         0  sin(phi) 0
        0         1         0 0
-sin(phi)         0  cos(phi) 0
        0         0         0 1

Around Z:
 cos(phi) -sin(phi)         0 0
 sin(phi)  cos(phi)         0 0
        0         0         1 0
        0         0         0 1

Implementation

All of this can be implemented as a class that stores 16 numbers, storing matrices in a column-major order.

function Transformation () {
  // Create an identity transformation
  this.fields = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
}

// Multiply matrices, to chain transformations
Transformation.prototype.mult = function (t) {
  var output = new Transformation()
  for (var row = 0; row < 4; ++row) {
    for (var col = 0; col < 4; ++col) {
      var sum = 0
      for (var k = 0; k < 4; ++k) {
        sum += this.fields[k * 4 + row] * t.fields[col * 4 + k]
      }
      output.fields[col * 4 + row] = sum
    }
  }
  return output
}

// Multiply by translation matrix
Transformation.prototype.translate = function (x, y, z) {
  var mat = new Transformation()
  mat.fields[12] = Number(x) || 0
  mat.fields[13] = Number(y) || 0
  mat.fields[14] = Number(z) || 0
  return this.mult(mat)
}

// Multiply by scaling matrix
Transformation.prototype.scale = function (x, y, z) {
  var mat = new Transformation()
  mat.fields[0] = Number(x) || 0
  mat.fields[5] = Number(y) || 0
  mat.fields[10] = Number(z) || 0
  return this.mult(mat)
}

// Multiply by rotation matrix around X axis
Transformation.prototype.rotateX = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[5] = c
  mat.fields[10] = c
  mat.fields[9] = -s
  mat.fields[6] = s
  return this.mult(mat)
}

// Multiply by rotation matrix around Y axis
Transformation.prototype.rotateY = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[0] = c
  mat.fields[10] = c
  mat.fields[2] = -s
  mat.fields[8] = s
  return this.mult(mat)
}

// Multiply by rotation matrix around Z axis
Transformation.prototype.rotateZ = function (angle) {
  angle = Number(angle) || 0
  var c = Math.cos(angle)
  var s = Math.sin(angle)
  var mat = new Transformation()
  mat.fields[0] = c
  mat.fields[5] = c
  mat.fields[4] = -s
  mat.fields[1] = s
  return this.mult(mat)
}

Looking through a Camera

Here comes the key part of presenting objects on the screen: the camera. There are two key components to a camera; namely, its position, and how it projects observed objects onto the screen.

Camera position is handled with one simple trick. There is no visual difference between moving the camera a meter forward, and moving the whole world a meter backward. So naturally, we do the latter, by applying the inverse of the matrix as a transformation.

The second key component is the way observed objects are projected onto the lens. In WebGL, everything visible on the screen is located in a box. The box spans between -1 and 1 on each axis. Everything visible is within that box. We can use the same approach of transformation matrices to create a projection matrix.

Orthographic Projection

Rectangular space getting transformed into the proper framebuffer dimensions using orthographic projection

The simplest projection is orthographic projection. You take a box in space, denoting the width, height and depth, with the assumption that its center is at the zero position. Then the projection resizes the box to fit it into the previously described box within which WebGL observes objects. Since we want to resize each dimension to two, we scale each axis by 2/size, whereby size is the dimension of the respective axis. A small caveat is the fact that we're multiplying the Z axis with a negative. This is done because we want to flip the direction of that dimension. The final matrix has this form:

2/width        0        0 0
      0 2/height        0 0
      0        0 -2/depth 0
      0        0        0 1

Perspective Projection

Frustum getting transformed into the proper framebuffer dimensions using perspective projection

We won't go through the details of how this projection is designed, but just use the final formula, which is pretty much standard by now. We can simplify it by placing the projection in the zero position on the x and y axis, making the right/left and top/bottom limits equal to width/2 and height/2 respectively. The parameters n and f represent the near and far clipping planes, which are the smallest and largest distance a point can be at to be captured by the camera. They are represented by the parallel sides of the frustum in the above image.

A perspective projection is usually represented with a field of view (we'll use the vertical one), aspect ratio, and the near and far plane distances. That information can be used to calculate width and height, and then the matrix can be created from the following template:

2*n/width          0           0           0
        0 2*n/height           0           0
        0          0 (f+n)/(n-f) 2*f*n/(n-f)
        0          0          -1           0

To calculate the width and height, the following formulas can be used:

height = 2 * near * Math.tan(fov * Math.PI / 360)
width = aspectRatio * height

The FOV (field of view) represents the vertical angle that the camera captures with its lens. The aspect ratio represents the ratio between image width and height, and is based on the dimensions of the screen we're rendering to.
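As a quick sanity check of those formulas (a standalone sketch with a hypothetical helper name, not part of the Camera class), a 90-degree vertical FOV at near distance 1 should give a near-plane height of exactly 2, since tan(45°) = 1:

```javascript
// Sketch: compute the near-plane dimensions from FOV (in degrees),
// aspect ratio, and near-plane distance, as in the formulas above.
function frustumSize (fovDegrees, aspectRatio, near) {
  var height = 2 * near * Math.tan(fovDegrees * Math.PI / 360)
  return { height: height, width: aspectRatio * height }
}

// 90 degrees / 360 * PI = PI/4, and tan(PI/4) = 1, so height = 2 * near
var size = frustumSize(90, 16 / 9, 1)
console.log(size.height.toFixed(3)) // 2.000
```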

Implementation

Now we can represent a camera as a class that stores the camera position and projection matrix. We also need to know how to calculate inverse transformations. Solving general matrix inversions can be problematic, but there is a simplified approach for our special case.

function Camera () {
  this.position = new Transformation()
  this.projection = new Transformation()
}

Camera.prototype.setOrthographic = function (width, height, depth) {
  this.projection = new Transformation()
  this.projection.fields[0] = 2 / width
  this.projection.fields[5] = 2 / height
  this.projection.fields[10] = -2 / depth
}

Camera.prototype.setPerspective = function (verticalFov, aspectRatio, near, far) {
  var height_div_2n = Math.tan(verticalFov * Math.PI / 360)
  var width_div_2n = aspectRatio * height_div_2n
  this.projection = new Transformation()
  this.projection.fields[0] = 1 / width_div_2n
  this.projection.fields[5] = 1 / height_div_2n
  this.projection.fields[10] = (far + near) / (near - far)
  this.projection.fields[11] = -1
  this.projection.fields[14] = 2 * far * near / (near - far)
  this.projection.fields[15] = 0
}

Camera.prototype.getInversePosition = function () {
  var orig = this.position.fields
  var dest = new Transformation()
  var x = orig[12]
  var y = orig[13]
  var z = orig[14]
  // Transpose the rotation matrix
  for (var i = 0; i < 3; ++i) {
    for (var j = 0; j < 3; ++j) {
      dest.fields[i * 4 + j] = orig[i + j * 4]
    }
  }

  // Translation by -p will apply R^T, which is equal to R^-1
  return dest.translate(-x, -y, -z)
}

This is the last piece we need before we can start drawing things on the screen.

Drawing an Object with the WebGL Graphics Pipeline

The simplest surface you can draw is a triangle. In fact, the majority of things that you draw in 3D space consist of a great number of triangles.

A basic look at what steps of the graphics pipeline do

The first thing that you need to understand is how the screen is represented in WebGL. It is a 3D space, spanning between -1 and 1 on the x, y, and z axis. By default this z axis is not used, but you are interested in 3D graphics, so you'll want to enable it right away.

Having that in mind, what follows are three steps required to draw a triangle onto this surface.

You can define three vertices, which would represent the triangle you want to draw. You serialize that data and send it over to the GPU (graphics processing unit). With a whole model available, you can do that for all the triangles in the model. The vertex positions you give are in the local coordinate space of the model you've loaded. Put simply, the positions you provide are the exact ones from the file, and not the ones you get after performing matrix transformations.

Now that you've given the vertices to the GPU, you tell the GPU what logic to use when placing the vertices onto the screen. This step will be used to apply our matrix transformations. The GPU is very good at multiplying a lot of 4x4 matrices, so we'll put that ability to good use.

In the last step, the GPU will rasterize that triangle. Rasterization is the process of taking vector graphics and determining which pixels of the screen need to be painted for that vector graphics object to be displayed. In our case, the GPU is trying to determine which pixels are located inside each triangle. For each pixel, the GPU will ask you what color you want it to be painted.

These are the four elements needed to draw anything you want, and they are the simplest example of a graphics pipeline. What follows is a look at each of them, and a simple implementation.

The Default Framebuffer

The most important element for a WebGL application is the WebGL context. You can access it with gl = canvas.getContext('webgl'), or use 'experimental-webgl' as a fallback, in case the currently used browser doesn't support all WebGL features yet. The canvas we referred to is the DOM element of the canvas we want to draw on. The context contains many things, among which is the default framebuffer.

You could loosely describe a framebuffer as any buffer (object) that you can draw on. By default, the default framebuffer stores the color for each pixel of the canvas that the WebGL context is bound to. As described in the previous section, when we draw on the framebuffer, each pixel is located between -1 and 1 on the x and y axis. Something we also mentioned is the fact that, by default, WebGL doesn't use the z axis. That functionality can be enabled by running gl.enable(gl.DEPTH_TEST). Great, but what is a depth test?

Enabling the depth test allows a pixel to store both color and depth. The depth is the z coordinate of that pixel. After you draw to a pixel at a certain depth z, to update the color of that pixel, you need to draw at a z position that is closer to the camera. Otherwise, the draw attempt will be ignored. This allows for the illusion of 3D, since drawing objects that are behind other objects will cause those objects to be occluded by objects in front of them.
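That rule fits in a few lines. This hypothetical sketch mimics what the depth test does for a single pixel (WebGL's depth comparison is actually configurable; behavior like gl.LESS, where smaller depth wins, is sketched here):

```javascript
// Sketch: a single pixel with a depth test, where smaller depth is closer.
function makePixel () {
  return { color: null, depth: Infinity }
}

function drawPixel (pixel, color, depth) {
  if (depth < pixel.depth) {
    pixel.color = color
    pixel.depth = depth
    return true // draw accepted
  }
  return false // draw ignored: something closer was already drawn here
}

var pixel = makePixel()
console.log(drawPixel(pixel, 'red', 0.5)) // true
console.log(drawPixel(pixel, 'blue', 0.8)) // false — behind the red surface
console.log(pixel.color) // red
```

Note that draw order stops mattering for opaque surfaces: whichever surface is closest wins, regardless of when it was drawn.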

Any draws you perform stay on the screen until you tell them to get cleared. To do so, you have to call gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT). This clears both the color and depth buffer. To pick the color that the cleared pixels are set to, use gl.clearColor(red, green, blue, alpha).

Let's create a renderer that uses a canvas and clears it upon request:

function Renderer (canvas) {
  var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
  gl.enable(gl.DEPTH_TEST)
  this.gl = gl
}

Renderer.prototype.setClearColor = function (red, green, blue) {
  this.gl.clearColor(red / 255, green / 255, blue / 255, 1)
}

Renderer.prototype.getContext = function () {
  return this.gl
}

Renderer.prototype.render = function () {
  this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT)
}

var renderer = new Renderer(document.getElementById('webgl-canvas'))
renderer.setClearColor(100, 149, 237)

loop()

function loop () {
  renderer.render()
  requestAnimationFrame(loop)
}

Attaching this script to the following HTML will give you a bright blue rectangle on the screen.

<!DOCTYPE html>
<html>
<head>
</head>
<body>
    <canvas id="webgl-canvas" width="800" height="500"></canvas>
    <script src="script.js"></script>
</body>
</html>

The requestAnimationFrame call causes the loop to be called again as soon as the previous frame is done rendering and all event handling is finished.

Vertex Buffer Objects

The first thing you need to do is define the vertices that you want to draw. You can do that by describing them via vectors in 3D space. After that, you want to move that data into the GPU RAM, by creating a new Vertex Buffer Object (VBO).

A Buffer Object in general is an object that stores an array of memory chunks on the GPU. It being a VBO just denotes what the GPU can use the memory for. Most of the time, Buffer Objects you create will be VBOs.

You can fill the VBO by taking all N vertices that we have and creating an array of floats with 3N elements for the vertex position and vertex normal VBOs, and 2N for the texture coordinates VBO. Each group of three floats, or two floats for UV coordinates, represents individual coordinates of a vertex. Then we pass these arrays to the GPU, and our vertices are ready for the rest of the pipeline.
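To illustrate the layout, here is a small standalone sketch (independent of the Geometry serialization methods shown later) that flattens three vertices into such tightly packed arrays:

```javascript
// Sketch: flatten vertex objects into the packed float arrays a VBO expects.
var vertices = [
  { position: [0, 0, 0], uv: [0, 0] },
  { position: [1, 0, 0], uv: [1, 0] },
  { position: [0, 1, 0], uv: [0, 1] }
]

var positions = []
var uvs = []
vertices.forEach(function (v) {
  positions.push(v.position[0], v.position[1], v.position[2]) // 3 floats per vertex
  uvs.push(v.uv[0], v.uv[1]) // 2 floats per vertex
})

console.log(positions.length) // 9, i.e. 3N for N = 3 vertices
console.log(uvs.length) // 6, i.e. 2N
```

These flat arrays are what ends up wrapped in a Float32Array and handed to bufferData() below.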

Since the data is now on the GPU RAM, you can delete it from the general purpose RAM. That is, unless you want to later on modify it, and upload it again. Each modification needs to be followed by an upload, since modifications in our JS arrays don't apply to VBOs in the actual GPU RAM.

Below is a code example that provides all of the described functionality. An important note to make is the fact that variables stored on the GPU are not garbage collected. That means that we have to manually delete them once we don't want to use them any more. We will just give you an example of how that is done here, and will not focus on that concept further on. Deleting variables from the GPU is necessary only if you plan to stop using certain geometry throughout the program.

We also added serialization to our Geometry class and elements within it.

Geometry.prototype.vertexCount = function () {
  return this.faces.length * 3
}

Geometry.prototype.positions = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.position
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.normals = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.normal
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.uvs = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.uv
      answer.push(v.x, v.y)
    })
  })
  return answer
}

////////////////////////////////

function VBO (gl, data, count) {
  // Creates buffer object in GPU RAM where we can store anything
  var bufferObject = gl.createBuffer()
  // Tell which buffer object we want to operate on as a VBO
  gl.bindBuffer(gl.ARRAY_BUFFER, bufferObject)
  // Write the data, and set the flag to optimize
  // for rare changes to the data we're writing
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW)
  this.gl = gl
  this.size = data.length / count
  this.count = count
  this.data = bufferObject
}

VBO.prototype.destroy = function () {
  // Free memory that is occupied by our buffer object
  this.gl.deleteBuffer(this.data)
}

The VBO data type generates the VBO in the passed WebGL context, based on the array passed as the second parameter.

You can see three calls to the gl context. The createBuffer() call creates the buffer. The bindBuffer() call tells the WebGL state machine to use this specific memory as the current VBO (ARRAY_BUFFER) for all future operations, until told otherwise. After that, we set the value of the current VBO to the provided data, with bufferData().

We also provide a destroy method that deletes our buffer object from the GPU RAM, by using deleteBuffer().

You can use three VBOs and a transformation to describe all the properties of a mesh, together with its position.

function Mesh (gl, geometry) {
  var vertexCount = geometry.vertexCount()
  this.positions = new VBO(gl, geometry.positions(), vertexCount)
  this.normals = new VBO(gl, geometry.normals(), vertexCount)
  this.uvs = new VBO(gl, geometry.uvs(), vertexCount)
  this.vertexCount = vertexCount
  this.position = new Transformation()
  this.gl = gl
}

Mesh.prototype.destroy = function () {
  this.positions.destroy()
  this.normals.destroy()
  this.uvs.destroy()
}

As an example, here is how we can load a model, store its properties in the mesh, and then destroy it:

Geometry.loadOBJ('/assets/model.obj').then(function (geometry) {
  var mesh = new Mesh(gl, geometry)
  console.log(mesh)
  mesh.destroy()
})

Shaders

What follows is the previously described two-step process of moving points into desired positions and painting all individual pixels. To do this, we write a program that is run on the graphics card many times. This program typically consists of at least two parts. The first part is a Vertex Shader, which is run for each vertex, and outputs where we should place the vertex on the screen, among other things. The second part is the Fragment Shader, which is run for each pixel that a triangle covers on the screen, and outputs the color that pixel should be painted to.

Vertex Shaders

Let's say you want to have a model that moves around left and right on the screen. In a naive approach, you could update the position of each vertex and resend it to the GPU. That process is expensive and slow. Alternatively, you would give the GPU a program to run for each vertex, and do all those operations in parallel with a processor that is built for doing exactly that task. That is the role of a vertex shader.
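The naive CPU-side approach can be sketched in a few lines of JavaScript — a hypothetical `translateX` helper, shown only to illustrate the per-frame work (touch every vertex, then re-upload the whole array) that a vertex shader does in parallel on the GPU instead:

```javascript
// Naive approach: shift every vertex on the CPU. After this, the whole
// positions array would have to be re-sent to the GPU, every frame.
function translateX (positions, dx) {
  // positions is a flat [x, y, z, x, y, z, ...] array
  for (var i = 0; i < positions.length; i += 3) {
    positions[i] += dx
  }
  return positions
}

var positions = [0, 0, 0, 1, 0, 0, 0, 1, 0]
translateX(positions, 0.5)
console.log(positions) // [0.5, 0, 0, 1.5, 0, 0, 0.5, 1, 0]
```

With a vertex shader, the positions stay on the GPU untouched, and only a single uniform (the model matrix) changes per frame.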

A vertex shader is the part of the rendering pipeline that processes individual vertices. A call to the vertex shader receives a single vertex and outputs a single vertex after all possible transformations to the vertex are applied.

Shaders are written in GLSL. There are a lot of unique elements to this language, but most of the syntax is very C-like, so it should be understandable to most people.

There are three types of variables that go in and out of a vertex shader, and all of them serve a specific use:

  • attribute — These are inputs that hold specific properties of a vertex. Previously, we described the position of a vertex as an attribute, in the form of a three-element vector. You can look at attributes as values that describe one vertex.
  • uniform — These are inputs that are the same for every vertex within the same rendering call. Let's say that we want to be able to move our model around, by defining a transformation matrix. You can use a uniform variable to describe that. You can point to resources on the GPU as well, like textures. You can look at uniforms as values that describe a model, or a part of a model.
  • varying — These are outputs that we pass to the fragment shader. Since there are potentially thousands of pixels for a triangle of vertices, each pixel will receive an interpolated value for this variable, depending on the position. So if one vertex sends 500 as an output, and another one 100, a pixel that is in the middle between them will receive 300 as an input for that variable. You can look at varyings as values that describe surfaces between vertices.
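The interpolation the rasterizer performs on varyings is plain linear interpolation per component. The 500/100 example above can be checked with a one-line sketch:

```javascript
// Linear interpolation, as done for varying values between two vertices:
// t = 0 gives the first vertex's output, t = 1 the second's.
function lerp (a, b, t) {
  return a + (b - a) * t
}

// A pixel halfway between a vertex that output 500 and one that output 100:
console.log(lerp(500, 100, 0.5)) // 300
```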

So, let's say you want to create a vertex shader that receives a position, normal, and uv coordinates for each vertex, and a position, view (inverse camera position), and projection matrix for each rendered object. Let's say you also want to paint individual pixels based on their uv coordinates and their normals. "How would that code look?" you might ask.

    attribute vec3 position;
    attribute vec3 normal;
    attribute vec2 uv;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    varying vec3 vNormal;
    varying vec2 vUv;

    void main() {
        vUv = uv;
        vNormal = (model * vec4(normal, 0.)).xyz;
        gl_Position = projection * view * model * vec4(position, 1.);
    }

Most of the elements here should be self-explanatory. The key thing to notice is the fact that there are no return values in the main function. All values that we would want to return are assigned, either to varying variables, or to special variables. Here we assign to gl_Position, which is a four-dimensional vector, whereby the last dimension should always be set to one. Another strange thing you might notice is the way we construct a vec4 out of the position vector. You can construct a vec4 by using four floats, two vec2s, or any other combination that results in four elements. There are a lot of seemingly strange type castings which make perfect sense once you're familiar with transformation matrices.

You can also see that here we can perform matrix transformations extremely easily. GLSL is specifically made for this kind of work. The output position is calculated by multiplying the projection, view, and model matrices and applying the result to the position. The output normal is just transformed to world space. We'll explain later why we've stopped there with the normal transformations.
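To make the matrix math concrete, here is a sketch of what `projection * view * model * vec4(position, 1.)` computes, using the column-major layout that WebGL uniforms use. With identity view and projection matrices, a translation stored in the model matrix moves the point directly:

```javascript
// Multiply a column-major 4x4 matrix by a 4-component vector, as the GPU
// does for gl_Position = projection * view * model * vec4(position, 1.)
function transform (m, v) {
  var out = [0, 0, 0, 0]
  for (var row = 0; row < 4; row++) {
    for (var col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col]
    }
  }
  return out
}

var identity = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
// Model matrix translating by (2, 3, 4); the translation sits in the last column.
var model = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 2, 3, 4, 1]

var position = [1, 1, 1, 1] // vec4(position, 1.)
var world = transform(model, position)
var clip = transform(identity, transform(identity, world)) // view and projection = identity
console.log(clip) // [3, 4, 5, 1]
```

Note the 1 in the last component: it is what lets the translation column take effect, which is why positions use vec4(position, 1.) while directions like normals use a 0 there.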

For now, we will keep it simple, and move on to painting individual pixels.

Fragment Shaders

A fragment shader is the step after rasterization in the graphics pipeline. It generates color, depth, and other data for every pixel of the object that is being painted.

The principles behind implementing fragment shaders are very similar to vertex shaders. There are three major differences, though:

  • There are no more varying outputs, and attribute inputs have been replaced with varying inputs. We have just moved on in our pipeline, and things that are the output in the vertex shader are now inputs in the fragment shader.
  • Our only output now is gl_FragColor, which is a vec4. The elements represent red, green, blue, and alpha (RGBA), respectively, with variables in the 0 to 1 range. You should keep alpha at 1, unless you're doing transparency. Transparency is a fairly advanced concept though, so we'll stick to opaque objects.
  • At the beginning of the fragment shader, you need to set the float precision, which is important for interpolations. In almost all cases, just stick to the lines from the following shader.

With that in mind, you can easily write a shader that paints the red channel based on the U position, the green channel based on the V position, and sets the blue channel to maximum.

    #ifdef GL_ES
    precision highp float;
    #endif

    varying vec3 vNormal;
    varying vec2 vUv;

    void main() {
        gl_FragColor = vec4(vUv, 1., 1.);
    }
