For that particular one I’m using A-Frame to handle the tracking of custom markers within the browser. The workflow of exporting a 3D object from Blender to glTF and then loading it into the Three.js scene is remarkably easy.
In the future I’d like to see how well one of my photogrammetry-captured objects renders, and how well animated objects and interactivity work. I suspect that as long as you’re mindful of most Three.js rendering limitations you’d be fine, but who knows with the addition of the whole AR layer within the browser.
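For a sense of how little is involved, here’s a minimal sketch of that kind of scene using A-Frame with the AR.js marker-tracking components. The `marker.patt` pattern file and `model.glb` export are placeholder names for assets you’d generate yourself:

```html
<!-- Minimal A-Frame + AR.js scene: a custom pattern marker anchors a glTF model.
     marker.patt (custom marker) and model.glb (Blender glTF export) are placeholder assets. -->
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>

<a-scene embedded arjs>
  <!-- When the camera sees the custom marker, everything inside is shown anchored to it -->
  <a-marker type="pattern" url="marker.patt">
    <!-- The glTF exported from Blender (File > Export > glTF 2.0) -->
    <a-entity gltf-model="url(model.glb)" scale="0.5 0.5 0.5"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```

Because A-Frame entities are Three.js objects under the hood, anything loaded this way can still be manipulated with regular Three.js code afterwards.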
I’ve been experimenting with more photogrammetry tools and techniques. This time I’m recreating 3D human heads using FaceBuilder, a Blender add-on developed by KeenTools. Using FaceBuilder is pretty straightforward:
you take 7 photos of the head you’d like to recreate at various angles,
import those photos into Blender using FaceBuilder,
align the FaceBuilder mesh to the 7 photos using their pinning system,
bake the photos into a texture used by the generated head mesh, and
clean up the baked texture by doing a bit of clone tool work over anything that may look off.
I’ve found that taking photos on a bright, overcast day gets the best results. It also helps to capture the 7 key photos as quickly as possible (before your subject shifts around too much) and to take extra detail shots around the head, which can be used to texture paint over anything that looks off once the textures are baked.
I’ve been doing some photogrammetry tests using my iPhone 11’s camera, Agisoft Metashape, and a Proko Anatomical Skull. The goal is to accurately convert a physical 3D object into something that can be used within a game. I think if I focus on taking less blurry photos (like the photos of the back of the skull were) the results will be spot on next time. It’s interesting how you can see the lack of detail translating into the glitchiness of the generated mesh. There’s a bubbling, particle-like explosion wherever the detail drops below a certain threshold.
Anywho, after I get a workflow to reliably capture a 3D object, the next step would be to retopologize and possibly rig the object in Blender. I’m anxious to move on to that soon. 🙂
Over the last couple of months I’ve been busy prototyping several shooting levels for a potential VR Game. They’re all extremely rough but I’m quite happy with most of them and am planning on expanding them into a full-on, actual game. Some of my favorite prototypes are a whack-a-mole style shooter, dodging asteroids as you’re shooting them apart, and defending against waves of humanoid attackers.
As you can see in the screenshots, the prototypes all take place within an office environment. I like the idea of a video game existing within augmented reality, with your co-workers unaware of what you’re actually doing, so I think that’s the direction I’ll be going in for now.
This office/AR concept also lends itself to a lot of interesting juxtapositions. Like deer grazing beside a water cooler, vikings attacking around a conference table, or space ships flying past the building’s receptionist.
Over the last year or so I’ve become increasingly interested in virtual reality. I suppose it started when I saw Warren Spector’s keynote and attended a few of the VR sessions at 2016’s East Coast Game Conference. Before then I was intrigued by VR but didn’t care about investing time or money into it.
Once ECGC gave me the VR bug, I started by messing around with a couple of different versions of Google Cardboard and the Google VR SDK for Unity. The View-Master Virtual Reality headset is the nicest of the ones I own, although the official Google Cardboard viewer has its charm too, and is among the least expensive to acquire. During this period I worked up a couple of different game prototypes to explore what was possible as a developer. I think the most successful was a reticle-based maze prototype.
It seemed like mobile devices were the best entry point for most people to adopt VR (almost every modern phone has the appropriate hardware, after all), so I purchased a Gear VR, a Samsung Galaxy S7, and a SteelSeries Stratus XL controller to expand what I could experiment with. Out of the games that were available, I think Land’s End, Adventure Time: Magic Man’s Head Games, and Dark Days were among the most enjoyable to play. At this point I had started developing a reticle-based point & click adventure game. I had actually written almost the entire script for the game, but unfortunately my ability to create 3D assets is a bit limited, so this project has been on hold until I can spend more time on it myself or find a 3D artist to work with.
Back around November 2016 I purchased an HTC Vive, and last month I added an Oculus Rift to my collection. I’ve been working on a VR game that I’d like to release on SteamVR and was surprised by how the different styles of tracking affect gameplay. With the Oculus Rift and PlayStation VR, the tracking sensors face the user from one direction, so it’s very easy for a sensor to lose sight of a controller when it’s blocked by the user’s body. This makes any game where the user can move and turn in 3D space impractical. You pretty much have to be physically facing a specific direction in the real world for it to work well.
It’s been interesting to see what each platform does well and where it leaves something to be desired. Interestingly, the Cardboard and Gear VR platforms are the easiest to look around in since their headsets don’t require a cable. Out of all of the platforms I’ve tried, the HTC Vive is my favorite. It very rarely loses track of the headset or controllers, and it has a simpler controller that feels natural and provides just the right number of buttons to interact with.