Hey everyone, I’ve decided to start posting weekly updates on the engine to keep you guys up to speed in between videos. I’ll be covering what I did over the week, updating you on my timelines, sharing screenshots/videos, etc.
Graphics
The big development of the last few months is that we’re finally at the point where I can hook the graphics engine back up, so I finally have visuals from the engine to show you instead of just claiming to be doing things. The reason it’s taken so long to get here, even though I technically had most of the graphics pipeline working months ago, is that I didn’t want to jump ahead and add stuff like mesh rendering before I knew how my asset pipeline was going to work, and then have to re-write even more stuff. That meant writing most of the asset downloading backend before moving on to graphics, starting with actually uploading stuff to the server:
Server Web App
Originally I was going to go with a console-based application for uploading assets, but I realized there would probably be other things you’d need to do on the server as well. I also realized that if asset servers are basically just web servers speaking a different communication protocol, why not give them HTTP support and extremely simple webserver functionality, and have uploads happen through an API call? So that’s what I did over January and February, and here’s the current result:
After a login page, you can access a one-page web app I built with React. This was a mistake. Though I’ve found React great for displaying lots of variable data, it’s terrible when you need to edit that data, so I’ll probably refactor all of this at some point.
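For a sense of what “uploads happen through an API call” boils down to on the backend, here’s a minimal sketch of that kind of endpoint. It’s written in C++ with the cpp-httplib library purely for illustration; the route, form-field name, and handler are made up and aren’t the server’s actual code.

```cpp
#include <httplib.h>  // cpp-httplib, a single-header HTTP server library

int main() {
    httplib::Server server;

    // Hypothetical upload route: the web app POSTs the asset file as
    // multipart form data, and the server hands the bytes to the
    // asset-processing step.
    server.Post("/api/assets/upload", [](const httplib::Request& req, httplib::Response& res) {
        if (!req.has_file("asset")) {
            res.status = 400;
            res.set_content("missing asset file", "text/plain");
            return;
        }
        const auto& file = req.get_file_value("asset");
        // file.filename, file.content_type, and file.content (the raw bytes)
        // would be passed along to processing/storage here.
        res.set_content("upload received: " + file.filename, "text/plain");
    });

    server.listen("0.0.0.0", 8080);
}
```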
Here you can see the assets page. The “show dependencies” option doesn’t do anything at the moment; at first I had the idea of only showing assets that nothing else depends on, to cut down the list, but that wouldn’t actually have done much for organization. I’ve since moved on to a folder-style approach instead. It’s only implemented in my database and currently unused, but that’s the future plan.
Once we click on an asset we get a more detailed view of it. We can also click the “create asset” button to upload a new one, which brings us to a list of options.
After we click the only option we get this page:
As the name suggests, this allows us to extract all the relevant assets (at the moment only meshes and transform data) from a .glb file and create an assembly from them. The tick boxes don’t do anything right now, though their values are sent to the server.
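The extraction code itself isn’t shown in this post, but conceptually it’s just a walk over the glTF data. Here’s a rough sketch of the idea using C++ and the tinygltf library; the library choice and the function are illustrative, not the engine’s real importer.

```cpp
#include <string>
#include <tiny_gltf.h>  // tinygltf, a header-only glTF/.glb loader

// Sketch of pulling meshes and node transforms out of a .glb file.
bool extractAssembly(const std::string& path) {
    tinygltf::Model model;
    tinygltf::TinyGLTF loader;
    std::string err, warn;
    if (!loader.LoadBinaryFromFile(&model, &err, &warn, path))
        return false;

    for (const tinygltf::Node& node : model.nodes) {
        // Transform data: glTF nodes carry either a matrix or TRS components.
        // node.translation / node.rotation / node.scale are empty when unset.

        if (node.mesh >= 0) {
            const tinygltf::Mesh& mesh = model.meshes[node.mesh];
            for (const tinygltf::Primitive& prim : mesh.primitives) {
                // prim.indices and prim.attributes ("POSITION", "NORMAL",
                // "TANGENT", "TEXCOORD_0", ...) point into model.accessors,
                // which is where the raw index/vertex data gets read from.
            }
        }
    }
    // ...build the assembly and mesh assets from what was gathered...
    return true;
}
```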
After creating an assembly we get something that looks like this:
Up at the top is a list of all the assets this assembly depends on. Right now I’m only showing meshes, but in the future there will be textures, scripts, materials, etc. Below that list we have a very basic editor for the entities:
The “add component” button doesn’t do anything yet, since I’ve yet to figure out how I want to handle that, but you can actually edit transforms and parenting, that is, if you understand what the component’s members actually are. In the future I plan to change it so that we extract the translation/rotation/scale components and store them as separate components, so they’re actually human-readable, and I’ll also need to work on figuring out a way to store names for the member variables of components.
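To give a concrete picture of what “storing names for member variables” could look like, here’s a small sketch of the kind of data layout I mean. The type and field names are placeholders for illustration, not the engine’s actual definitions.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Describes one member of a component so the editor can show it by name.
struct ComponentMember {
    std::string name;   // e.g. "translation", "rotation", "scale"
    std::string type;   // e.g. "vec3", "quat"
    size_t offset = 0;  // byte offset into the component's raw data
};

// Describes a whole component type, e.g. "Transform".
struct ComponentDescription {
    std::string name;
    std::vector<ComponentMember> members;
};

// One entity inside an assembly: which components it has and who its parent is.
struct AssemblyEntity {
    std::vector<uint32_t> components;  // indices into the assembly's component data
    int32_t parent = -1;               // parent entity index, -1 for the assembly root
};

// The assembly itself: its dependencies plus the entities to spawn.
struct Assembly {
    std::vector<std::string> dependencies;  // ids of meshes, textures, scripts, ...
    std::vector<AssemblyEntity> entities;
};
```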
So obviously there’s a ton of work to be done there, even if I don’t scrap it all and start again with a different web framework. But let’s move on to the asset downloading pipeline:
Asset Downloading
So, now that we have an asset uploaded to the server, processed, and stored, our client should be able to download it. Right now what happens is this (a rough code sketch follows the list):
- First, an “AssemblyRoot” component is added to an entity, holding the id of the assembly it is the root for.
- The asset manager then finds this entity and asks an asset server for that assembly. The IP/address of the server is stored in the id itself, for example: “localhost/hexID”
- Upon receiving the assembly asset, the asset manager then requests all of that assembly’s dependencies. In the case of “incremental assets” like meshes, it only requests an “asset header” that gives the bare minimum amount of info about them.
- After receiving all the required assets or asset headers, we run a few preprocessors on the asset to do things like remap mesh indexes from the ones stored in the asset to wherever they ended up in the graphics pipeline.
- Then finally, we inject the assembly into the entity component system, with the original entity carrying the “AssemblyRoot” component as its parent.
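Here’s the rough sketch mentioned above. Everything in it is a placeholder standing in for the engine’s real asset-manager and ECS interfaces; only the control flow mirrors the steps in the list.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Placeholder types and functions; the engine's real interfaces differ.
struct AssetID { std::string address; std::string hexID; };  // e.g. "localhost/hexID"
struct Assembly { std::vector<AssetID> dependencies; /* entity data ... */ };

Assembly requestAssembly(const AssetID& id);   // full assembly from the server at id.address
void requestAsset(const AssetID& id);          // full asset (non-incremental)
void requestAssetHeader(const AssetID& id);    // header only (incremental assets like meshes)
bool isIncremental(const AssetID& id);
void remapMeshIndices(Assembly& assembly);     // asset-local ids -> graphics pipeline slots
void injectIntoECS(const Assembly& assembly, uint64_t parentEntity);

// The flow from the list: fetch the assembly, fetch its dependencies (or just
// their headers), preprocess, then inject with the AssemblyRoot entity as parent.
void loadAssembly(uint64_t assemblyRootEntity, const AssetID& assemblyID) {
    Assembly assembly = requestAssembly(assemblyID);

    for (const AssetID& dep : assembly.dependencies) {
        if (isIncremental(dep))
            requestAssetHeader(dep);
        else
            requestAsset(dep);
    }

    remapMeshIndices(assembly);
    injectIntoECS(assembly, assemblyRootEntity);
}
```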
Incremental Asset Loading
A core part of my vision for the engine is what I’m calling incremental loading of assets. I currently attend a college in the absolute middle of nowhere, where fast internet is hard to find, and the places I’ve been living at so far have averaged around 12 Mbps, meaning I sometimes have to wait up to 5 minutes for a 300 MB VRChat world to load. And though I am moving to a place with much better internet in a few weeks, it’s made me realize what people who are just stuck with bad internet have to go through.
My solution is to make it so that you don’t have to wait until everything is fully loaded. Of course, things that need to be complete will be, but stuff that isn’t essential to gameplay, like meshes and textures, could be streamed in over time and displayed as it arrives instead of waiting until it’s completely loaded. So think of that generic Tron-style loading sequence where first the mesh flows into existence with a glowing wireframe appearance, and then a texture gradually covers it up. (Don’t worry, it’ll be customizable, but that’s the base appearance I’ll be going for.)
I decided to test out mesh loading first since I already have all the code for displaying them and it would look cool.
The way I decided to go about this is actually pretty simple: where I would usually ask for a whole asset, I instead ask for an asset “header” that contains just the information I need to allocate the memory the asset is going to need. For meshes, this means sending the length of the index buffer, the number of vertices, and whether it has normals, tangents, and UVs. Using that, the client can allocate all the memory it needs on the graphics card for those things, filled with just empty data. I store all the data for meshes in a single buffer, so it was really important to know the offsets of everything from the beginning if I didn’t want to do any reallocation.
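As a concrete (and purely illustrative) example of what such a header and the resulting offsets could look like; the field names and sizes are assumptions, not the engine’s actual format:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative mesh "header": just enough info to size the GPU allocation
// before any real data has arrived.
struct MeshHeader {
    uint32_t indexCount;   // length of the index buffer
    uint32_t vertexCount;  // number of vertices
    bool hasNormals;
    bool hasTangents;
    bool hasUVs;
};

// Byte offsets of each section within the mesh's portion of the buffer.
struct MeshAllocation {
    size_t indexOffset = 0, positionOffset = 0, normalOffset = 0,
           tangentOffset = 0, uvOffset = 0, totalSize = 0;
};

// Because everything lives in one buffer, the offsets have to be fixed up
// front so incoming data can be copied in later without any reallocation.
MeshAllocation computeAllocation(const MeshHeader& h) {
    MeshAllocation a;
    size_t cursor = 0;
    a.indexOffset    = cursor; cursor += h.indexCount  * sizeof(uint32_t);
    a.positionOffset = cursor; cursor += h.vertexCount * 3 * sizeof(float);  // vec3 positions
    if (h.hasNormals)  { a.normalOffset  = cursor; cursor += h.vertexCount * 3 * sizeof(float); }
    if (h.hasTangents) { a.tangentOffset = cursor; cursor += h.vertexCount * 4 * sizeof(float); }
    if (h.hasUVs)      { a.uvOffset      = cursor; cursor += h.vertexCount * 2 * sizeof(float); }
    a.totalSize = cursor;
    return a;
}
```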
My first test of this was just sending the header first, and then, after a slight delay, the actual data as I normally would have before:
I also intentionally slowed down the server’s responses for that one, so that I could just verify that everything was working.
Next up I tested out sending the index buffer first and then the vertices, with… interesting results:
I realized that to have it actually look nice while loading, I would have to take into account how index buffers work. Since there’s no guarantee that the index buffer and the vertex buffers are in matching sorted order (unless I wanted to sort them in the upload phase), I can’t just send vertices in order without the index buffer referencing vertices that haven’t been set yet (which is what causes the triangle corners defaulting to (0, 0, 0) in the video).
The solution I came up with for this is actually pretty simple: I send the index buffer, but after every index I also send the corresponding vertex information, so that every vertex in use gets set. (I also used an array of booleans to make sure that I didn’t resend vertices once they had already been sent.) And this was the result:
Visually, it turned out exactly how I wanted. There are still some issues with the implementation behind the scenes, though. For instance, I haven’t created a way to check whether a mesh is fully downloaded, there’s nothing re-requesting unsent data, and it’s not integrated with the graphics swap chain (meaning that rendering essentially has to pause every time a mesh is updated).
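For reference, the send order described above boils down to something like this. The sendIndex/sendVertex functions are stand-ins for the real netcode; only the ordering and the “already sent” bookkeeping are the point.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

void sendIndex(uint32_t index);         // placeholder for the real netcode
void sendVertex(uint32_t vertexIndex);  // sends that vertex's position/normal/tangent/UV

// Walk the index buffer; right after each index, send its vertex if it
// hasn't been sent yet, so every vertex that's in use gets real data.
void streamMesh(const std::vector<uint32_t>& indices, size_t vertexCount) {
    std::vector<bool> vertexSent(vertexCount, false);
    for (uint32_t index : indices) {
        sendIndex(index);
        if (!vertexSent[index]) {
            sendVertex(index);
            vertexSent[index] = true;
        }
    }
}
```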
Future
Over the next few weeks, I’m probably going to be putting much less time into the engine, as I’ll be pretty busy with moving to a new apartment, classes starting up, and hopefully getting a job. Also, at this point I’ve pretty much indefinitely delayed the pong demo while I work on making the asset pipeline more robust.
Next up, I’ll be working on adding shaders and textures to the asset system, refactoring the graphics pipeline to use instancing (right now every mesh is a separate draw call), using buffers instead of push constants for mesh positions, making the lighting an actual system instead of being hardcoded, figuring out how to make mesh reloading play nice with the swap chain, and, most importantly, adding a lot more error handling to the netcode.
There are also thirty-something other things that I’m forgetting right now, but that’s all for this time! Expect future updates to be much shorter, though, since this “weekly update” is covering several months.