
Understanding OpenGL basics in Rust

Computer graphics is a fascinating field that blends technology with art. At the time of writing, virtual and augmented reality technologies, which rely heavily on computer graphics, are developing rapidly. As a result, the study of graphics APIs has become more popular than ever.

Among the various graphics APIs available, OpenGL stands out as the most approachable. DirectX, Metal, and Vulkan are other options worth considering, but they are either not cross-platform or lower-level than OpenGL, making them harder to learn.

To better frame the role of OpenGL in a modern game architecture, think of it as sitting at the lowest level of the software stack, right above the operating system. In this article, we’ll learn the basics of working with OpenGL. Let’s get started!

OpenGL: A library from the ’90s

OpenGL is an open source, cross-platform graphics API introduced in 1992 as a standard API for 3D graphics programming by Silicon Graphics Inc. (SGI).

Since then, OpenGL has undergone several updates and revisions, adding new features and improving its performance over the years. Nowadays, OpenGL is widely used in the computer graphics industry, particularly in the gaming industry, to create realistic 3D graphics for video games, scientific visualization, virtual reality, and CAD software. The latest version of OpenGL at the time of writing, v4.6, was released in 2017.

The boilerplate

To generate a basic “Hello, World!” application, we’ll start from the project skeleton created by the cargo new command. We’ll modify it, first to add the dependencies and then the functionality. To follow along with the code examples, you can check out the GitHub repo.

The first step in approaching OpenGL using Rust is to choose a crate that can provide a rusty interface to OpenGL.

For clarity, I’ll use glium, a high-level graphics library for Rust that provides a safe and convenient API for interacting with OpenGL. glium will handle accessing OpenGL, but OpenGL still needs a context, which includes all the necessary information for the GPU to process commands issued by the application, like buffers, textures, shaders, and rendering state.

For this, we’ll use glutin, a windowing library that provides a cross-platform API for creating windows and handling input events. It abstracts away many of the platform-specific details of window creation and provides a unified API that works on multiple platforms, including Windows, Linux, and macOS.
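glium re-exports glutin (and, through it, winit), so a single dependency is enough to follow along. A minimal Cargo.toml might look like the sketch below; the exact crate version is an assumption, chosen to match the glium::glutin API used in the code in this article:

```toml
[package]
name = "opengl-basics"
version = "0.1.0"
edition = "2018"

[dependencies]
# glutin is reachable as glium::glutin, no separate dependency needed
glium = "0.29"
```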

The diagram above offers an overall view of our application’s architecture. Notice how glium actually abstracts a lot, offering a single interface to glutin and winit, which are in charge, respectively, of providing a context to OpenGL and of interacting with the windowing system and its events.

Choosing the right set of libraries for this article has been the most complex part, as there are plenty of alternatives available. For example, check out this comparison between glium and gfx.

Step 1: The window and the event system

The code below is quite simple. We’ll use glium::glutin::event_loop::EventLoop for handling events; this is the scaffolding that will let us assemble the logic and the interactions with the rendering part, which OpenGL is in charge of:

extern crate glium;

fn main() {
    let event_loop = glium::glutin::event_loop::EventLoop::new();

    let wb = glium::glutin::window::WindowBuilder::new()
        .with_inner_size(glium::glutin::dpi::LogicalSize::new(800.0, 600.0))
        .with_title("Hello world");
    let cb = glium::glutin::ContextBuilder::new();
    let display = glium::Display::new(wb, cb, &event_loop).unwrap();

    // The closure passed to run() receives every event; we only react
    // to the window's close request and ignore everything else
    event_loop.run(move |event, _, control_flow| {
        match event {
            glium::glutin::event::Event::WindowEvent { event, .. } => match event {
                glium::glutin::event::WindowEvent::CloseRequested => {
                    *control_flow = glium::glutin::event_loop::ControlFlow::Exit;
                },
                _ => (),
            },
            _ => (),
        }
    });
}

Together with the EventLoop, we’ll need the window. To create it, we’ll ask the WindowBuilder, passing it the parameters for building the window; in particular, the size and the title.

Next, we’ll create the OpenGL context, which is the function of the ContextBuilder. At this point, we can combine these three pieces by creating a Display.

The core concept of this simple program is passing the closure to the EventLoop::run() function. The closure will handle recognizing the events and manipulating the control flow to react to the different interactions that may happen with the window and the operating system.

In this case, it waits for the CloseRequested event, which is fired when you click on the window close button. When that happens, we issue the Exit command to the ControlFlow, exiting the application.
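The shape of that closure can be mimicked in plain Rust, without any windowing involved. The sketch below uses hypothetical stand-in enums for glutin’s Event, WindowEvent, and ControlFlow types, just to show the nested-match dispatch at work:

```rust
// Stand-ins for glutin's types: a simplified control flow, window
// events, and a top-level event that wraps them
#[derive(Debug, PartialEq)]
enum ControlFlow { Wait, Exit }

#[derive(Debug)]
enum WindowEvent { CloseRequested, Resized(u32, u32) }

#[derive(Debug)]
enum Event { WindowEvent { event: WindowEvent }, Other }

// The same nested match used in the event loop closure: only
// CloseRequested changes the control flow
fn handle(event: Event, control_flow: &mut ControlFlow) {
    match event {
        Event::WindowEvent { event } => match event {
            WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
            _ => {}
        },
        _ => {}
    }
}

fn main() {
    let mut flow = ControlFlow::Wait;
    handle(Event::Other, &mut flow);
    assert_eq!(flow, ControlFlow::Wait); // unrelated events change nothing
    handle(Event::WindowEvent { event: WindowEvent::CloseRequested }, &mut flow);
    assert_eq!(flow, ControlFlow::Exit); // close request flips the flow
    println!("final control flow: {:?}", flow);
}
```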

The cargo run --example step0 command will execute the code that, as you may have noticed, is not terribly exciting in terms of results. It will just open an empty window with the specified title and dimensions.

Additionally, the cargo command will complain because the variable display is not used. It will be in the next steps, but, for the sake of clarity, it’s important that we understand its role in the program. Again, you can reference the complete source code on GitHub.

Step 2: Add a rectangle

Out of the box, OpenGL doesn’t provide any function dedicated to drawing shapes. It only provides the graphics pipeline, so if we want to draw a rectangle, we must pass the vertices to the pipeline and assemble the shape.
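To make the “assemble the shape yourself” point concrete, here is a standalone sketch. The triangulate_fan helper is hypothetical, not part of any library; it fan-triangulates a convex polygon, which for a quad yields exactly the two triangles we’ll build by hand below:

```rust
// Fan triangulation: connect the first vertex to every remaining edge.
// Each vertex is a 2D position, as in the Vertex struct used later.
fn triangulate_fan(polygon: &[[f32; 2]]) -> Vec<[f32; 2]> {
    let mut triangles = Vec::new();
    for i in 1..polygon.len() - 1 {
        triangles.push(polygon[0]);
        triangles.push(polygon[i]);
        triangles.push(polygon[i + 1]);
    }
    triangles
}

fn main() {
    let rectangle = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]];
    let vertices = triangulate_fan(&rectangle);
    // A quad becomes 2 triangles = 6 vertices
    assert_eq!(vertices.len(), 6);
    // Same order as the shape vector below: v1,v2,v3, then v1,v3,v4
    assert_eq!(vertices[3], [0.0, 0.0]);
    assert_eq!(vertices[4], [1.0, 1.0]);
}
```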

Still, for simplicity, we won’t use any additional libraries. We’ll create a simple figure by hand, a rectangle, and specify its coordinates. The complete, commented version of the code is on GitHub. In the code below, we’ll report and comment on only the additions to the previous step:

// implement_vertex! (and the uniform! macro used later) require
// #[macro_use] extern crate glium; at the crate root
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
}

implement_vertex!(Vertex, position);

let vertex1 = Vertex { position: [0.0, 0.0] };
let vertex2 = Vertex { position: [0.0, 1.0] };
let vertex3 = Vertex { position: [1.0, 1.0] };
let vertex4 = Vertex { position: [1.0, 0.0] };
// Two triangles sharing the diagonal vertex1-vertex3 make the rectangle
let shape = vec![vertex1, vertex2, vertex3, vertex1, vertex3, vertex4];

let vertex_buffer = glium::VertexBuffer::new(&display, &shape).unwrap();
let indices = glium::index::NoIndices(glium::index::PrimitiveType::TrianglesList);

let vertex_shader_src = r#"
    #version 140

    in vec2 position;
    uniform mat4 matrix;
    out vec2 my_attr;

    void main() {
        my_attr = position;
        gl_Position = matrix * vec4(position, 0.0, 1.0);
    }
"#;

let fragment_shader_src = r#"
    #version 140

    in vec2 my_attr;
    out vec4 color;

    void main() {
        color = vec4(my_attr, 0.0, 1.0);
    }
"#;

let program = glium::Program::from_source(
    &display, vertex_shader_src, fragment_shader_src, None).unwrap();

In the code above, we can see glium at work. First, we define the four vertices of the rectangle and, using glium, the VertexBuffer that collects them. Note that we don’t use an index buffer (NoIndices): the rectangle is divided into two triangles simply by repeating the shared vertices in the shape vector.

Next, we define two shaders. By using glium’s Program interface, we pass them to OpenGL to compile and make available in the pipeline. The shaders are not complex: we just apply a matrix to the vertices in the vertex shader and use the vertices’ coordinates to generate a little color in the fragment shader.
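What the vertex shader computes can be reproduced on the CPU in a few lines of plain Rust. The transform function below is a hypothetical helper, not glium API; it evaluates matrix * vec4(position, 0.0, 1.0) with the matrix laid out column by column, as OpenGL expects:

```rust
// CPU sketch of gl_Position = matrix * vec4(position, 0.0, 1.0).
// matrix[col][row]: the outer index selects a column (column-major).
fn transform(matrix: [[f32; 4]; 4], position: [f32; 2]) -> [f32; 4] {
    let v = [position[0], position[1], 0.0, 1.0];
    let mut out = [0.0f32; 4];
    for row in 0..4 {
        for col in 0..4 {
            out[row] += matrix[col][row] * v[col];
        }
    }
    out
}

fn main() {
    // The identity matrix leaves the vertex where it is
    let identity = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0f32],
    ];
    assert_eq!(transform(identity, [0.5, -0.25]), [0.5, -0.25, 0.0, 1.0]);

    // The rotation matrix used below sends (1, 0) to (cos t, sin t)
    let t = std::f32::consts::FRAC_PI_2;
    let rotation = [
        [ t.cos(), t.sin(), 0.0, 0.0],
        [-t.sin(), t.cos(), 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0f32],
    ];
    let p = transform(rotation, [1.0, 0.0]);
    assert!((p[0] - t.cos()).abs() < 1e-6 && (p[1] - t.sin()).abs() < 1e-6);
}
```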

The following code is in charge of the actual drawing job and the animation:

// Assuming use glium::glutin; is in scope from here on
let elapsed_time =
    std::time::Instant::now().duration_since(start_time).as_millis() as u64;

// Wait for whatever is left of this frame's time budget, if anything
let wait_millis = match 1000 / TARGET_FPS >= elapsed_time {
    true => 1000 / TARGET_FPS - elapsed_time,
    false => 0,
};
let new_inst = std::time::Instant::now() + std::time::Duration::from_millis(wait_millis);

*control_flow = glutin::event_loop::ControlFlow::WaitUntil(new_inst);

t += delta;

// Bounce the angle back and forth between -PI and PI
if (t > std::f32::consts::PI) || (t < -std::f32::consts::PI) {
    delta = -delta;
}

let mut target = display.draw();
target.clear_color(0.0, 0.0, 1.0, 1.0);

// Column-major rotation matrix by the angle t
let uniforms = uniform! {
    matrix: [
        [ t.cos(), t.sin(), 0.0, 0.0],
        [-t.sin(), t.cos(), 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0f32],
    ]
};

target.draw(&vertex_buffer, &indices, &program, &uniforms,
            &Default::default()).unwrap();
target.finish().unwrap();

The first lines of code implement a simple fixed frame rate mechanism. They calculate how long handling the events took and, from that, how long to wait before the next frame. Once this calculation is done, we set the control flow to ControlFlow::WaitUntil the next frame time.

Keep in mind that this function comes from glutin and not glium, which we expect, because glutin is in charge of the windowing system and its events.
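The waiting logic is worth isolating. Here it is as a plain, testable function (wait_millis is a hypothetical helper, and TARGET_FPS mirrors the constant assumed by the code above):

```rust
const TARGET_FPS: u64 = 60;

// How many milliseconds to wait before the next frame, given how long
// this frame's event handling took
fn wait_millis(elapsed_ms: u64) -> u64 {
    let budget = 1000 / TARGET_FPS; // ~16 ms per frame at 60 FPS
    if budget >= elapsed_ms { budget - elapsed_ms } else { 0 }
}

fn main() {
    assert_eq!(wait_millis(0), 16);  // a free frame waits the full budget
    assert_eq!(wait_millis(10), 6);  // part of the budget already spent
    assert_eq!(wait_millis(40), 0);  // over budget: don't wait at all
}
```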

We perform rendering in two phases. First, we update the variable t, an angle, and calculate a rotation matrix. Next, we pass it together with the vertex buffer, the indices, and the shader programs to the draw function.

These two steps are enclosed between a parameterless call to display.draw(), which prepares the frame, and the finish() call, which completes the frame and swaps the frame buffers, exposing the newly rendered one and recycling the other for the next frame.

This is the very core of the rendering process. If you run the application, once again with cargo run, you’ll see a rotating, colorful rectangle bouncing back and forth.

Step 3: Add interaction

For the last step, we’ll concentrate on some interactions with the underlying part of the rendering process, the event loop:

glutin::event::Event::WindowEvent { event, .. } => match event {
    glutin::event::WindowEvent::CloseRequested => {
        *control_flow = glutin::event_loop::ControlFlow::Exit;
        return;
    },
    glutin::event::WindowEvent::KeyboardInput { input, .. } => {
        if input.state == glutin::event::ElementState::Pressed {
            if let Some(key) = input.virtual_keycode {
                match key {
                    glutin::event::VirtualKeyCode::C => delta = -delta,
                    glutin::event::VirtualKeyCode::R => t = 0.0,
                    _ => {}
                }
            }
        }
    },
    _ => return,
},

In the code above, we add another case to the match on the event: we look for KeyboardInput events whose state is ElementState::Pressed.

Intuitively, we look for events that represent key presses on the keyboard. Once we’re sure we’re in that case, we can implement some simple logic: if we press C, we reverse the direction of the rotation by changing the sign of the delta added to the angle t; if we press R, we reset the angle by setting t to 0.
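Stripped of the windowing types, this keyboard logic boils down to a small state update. The State struct and on_key function below are hypothetical stand-ins, but the arithmetic is exactly the one described above:

```rust
// The two values the key presses manipulate: the current angle and
// the per-frame increment
struct State { t: f32, delta: f32 }

fn on_key(state: &mut State, key: char) {
    match key {
        'c' => state.delta = -state.delta, // reverse the rotation
        'r' => state.t = 0.0,              // reset the angle
        _ => {}
    }
}

fn main() {
    let mut state = State { t: 1.0, delta: 0.02 };
    on_key(&mut state, 'c');
    assert_eq!(state.delta, -0.02); // rotation direction flipped
    on_key(&mut state, 'r');
    assert_eq!(state.t, 0.0);       // angle reset
}
```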

Running cargo run on the example above will now show our first interactive application with OpenGL and glium.

Conclusion

In this article, we explored three simple steps to build an interactive application with OpenGL and Rust. This is, of course, far from a game engine, but it still contains a few concepts that are relevant even when you use a cutting-edge game engine: the rendering context, the frame buffer, shader programming, events, and the event loop.

The post Understanding OpenGL basics in Rust appeared first on LogRocket Blog.


