I made a video to demonstrate the proof-of-concept for the Kros Operating System. Hope you enjoy!
In modern times, almost all software depends on other software. So in this more technical post, I’ll talk about the software used by Kros’s proof-of-concept, focusing on what I think are the biggest pieces that will remain an important part of Kros as it evolves.
Every operating system needs a kernel to handle low-level functions like process management and interprocess communication, and Linux is one of the most widely used open source kernels in the world. In fact, the Android operating system uses a modified Linux kernel.
The Kros proof-of-concept application is a Linux application that I’ve developed and tested almost exclusively on Arch Linux, which is my preferred Linux distribution. Kros successfully compiles and runs on Ubuntu as well.
Ultimately, even once Kros becomes a full operating system, it will continue to use the Linux kernel. At that point, however, Kros won’t be a Linux distribution like Ubuntu or Arch Linux. In fact, the API for developing Kros applications will be kernel-independent to leave open the possibility of switching to an alternative kernel (like Zircon) in the future.
OpenXR and Monado
OpenXR is an open API developed by the Khronos Group, the same organization responsible for the widely used graphics APIs OpenGL and Vulkan. Kros currently uses OpenXR to interact with mixed reality hardware. As a result, Kros runs on Monado, the OpenXR runtime for Linux.
As Kros evolves into an independent operating system, it will need to provide its own mixed reality hardware interface to application developers, enabling them to write immersive Kros applications using OpenXR much as Monado enables them to write Linux XR applications today. In a way, the roles of Kros and OpenXR will be reversed, with Kros hosting an OpenXR implementation rather than depending on one. Thankfully, Monado is open source, so Kros will be able to reuse its code.
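To make the runtime relationship concrete: on Linux, the OpenXR loader locates the active runtime (such as Monado) through a small JSON manifest. Here is a simplified sketch of that lookup in Python; the real loader also honors XDG configuration directories, so treat the exact paths as an approximation:

```python
import json
import os

# Default system-wide manifest location checked by the OpenXR loader on
# Linux (simplified; the real loader also searches XDG config paths).
SYSTEM_MANIFEST = "/etc/openxr/1/active_runtime.json"

def find_runtime_manifest(env, system_manifest=SYSTEM_MANIFEST):
    """Return the path of the active OpenXR runtime manifest.

    An explicit XR_RUNTIME_JSON override wins; otherwise fall back to
    the system-wide manifest installed by the runtime (e.g. Monado).
    """
    override = env.get("XR_RUNTIME_JSON")
    if override:
        return override
    return system_manifest

def runtime_library_path(manifest_text):
    """Extract the runtime's shared-library path from the manifest JSON."""
    manifest = json.loads(manifest_text)
    return manifest["runtime"]["library_path"]

# Example: a manifest like the one a Monado install provides.
example_manifest = (
    '{"file_format_version": "1.0.0",'
    ' "runtime": {"library_path": "libopenxr_monado.so"}}'
)
```

If Kros later hosts its own OpenXR implementation, the arrangement flips: instead of resolving a runtime this way, Kros would be the thing such a manifest points at.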
OGRE

OGRE is an open source 3D rendering engine. Unlike popular engines like Unreal and Unity, OGRE is strictly a 3D rendering engine, not a game engine.
The Kros demo uses OGRE to render the 3D user interface and the 3D virtual reality demo game. Any other services offered by full game engines would be either excessive or made redundant by Kros’s internal services, so an engine dedicated exclusively to 3D rendering is exactly what is needed.
While Kros may continue to use OGRE for the user interface layer, immersive application developers should be free to use any rendering engine or game engine that supports the OpenGL or Vulkan graphics APIs.
All of these big pieces are open source. Kros itself will be open source, so there is a strong preference for its dependencies to be open source as well. Even where proprietary software is currently used, such as the Leap Motion hand tracking, the plan is to eventually either access it through open APIs (for example, graphics drivers via Vulkan) or replace it with open alternatives.
Thanks for reading, and if you haven’t already, subscribe to the blog or the Twitter feed @KrosDev.
As the Kros Operating System is being created specifically to unlock the full potential of mixed reality (VR and AR) hardware, I thought it might be appropriate to discuss the hardware – specifically what I’ve used for the development process, what else will work with Kros, and what happens as the hardware advances.
My Hardware Setup
I’ve been using the following mixed reality equipment for developing the Kros proof-of-concept:
- HTC Vive headset (without the external beacons)
- Stereolabs ZED Mini stereo camera
- Leap Motion hand tracker
If you have these items, you can put together the same setup by following Stereolabs’ guide on using the Leap Motion with the ZED Mini.
As part of the process of making my setup portable, I’ll be adding the Nvidia Jetson AGX Xavier as the portable computer unit. In addition, though the Leap Motion has worked well, I’ll be switching to a hand tracking system that only requires the ZED Mini camera, for reasons I’ll cover in a future post.
Along the way, I also tried a pair of small individual cameras, which worked well for augmented reality, but I found that the ZED Mini made stereo calibration easier.
My intention with Kros is to make an operating system for a specific purpose – to facilitate mixed reality computing – not for a specific set of hardware. As a result, the Kros system is just software and not hardware, and while we may offer some hardware products (such as developer kits), Kros aims to be completely hardware vendor neutral.
For a computer to run Kros, it needs:
- sufficient computing power
- a headset that provides AR and pure VR
- a headset mounted camera
- hand tracking
Those requirements can be met in various ways, such as:
- a VR headset with a mounted stereo camera for AR passthrough and hand tracking
- a VR headset that can turn on optical passthrough, with a mounted camera, plus hand tracking gloves
Kros will aim to support any hardware configuration that satisfies the minimum functionality and has drivers available. I believe that many of the currently available VR and AR hardware components could be incorporated into such a setup.
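The “minimum functionality” idea can be sketched as a simple capability check: a setup qualifies if its components, taken together, cover display, passthrough, camera, and hand tracking, regardless of which device provides each. All of the names below are hypothetical illustrations, not Kros APIs:

```python
# Hypothetical capability check; none of these names are real Kros APIs.
REQUIRED = {"vr_display", "ar_passthrough", "camera", "hand_tracking"}

def supports_kros(components):
    """A setup qualifies if its components jointly cover every required
    capability, no matter how the capabilities are split across devices."""
    provided = set()
    for capabilities in components.values():
        provided.update(capabilities)
    return REQUIRED <= provided

# Two different setups matching the configurations described above,
# both of which qualify:
vive_setup = {
    "HTC Vive": {"vr_display"},
    "ZED Mini": {"ar_passthrough", "camera"},
    "Leap Motion": {"hand_tracking"},
}
optical_setup = {
    "optical-passthrough headset": {"vr_display", "ar_passthrough"},
    "mounted camera": {"camera"},
    "tracking gloves": {"hand_tracking"},
}
```

The point of the sketch is the vendor neutrality: the check cares only about capabilities, never about which manufacturer supplies them.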
Kros’s hardware vendor neutrality produces several advantages. To begin with, it will foster a competitive hardware landscape. Users with different systems will be able to use Kros, and applications developed for Kros will be able to run on most of those systems. Users with different budgets and needs will be able to choose the system that works best for them. In addition, Kros will foster innovation going forward since it will be able to incorporate the newest developments in mixed reality that become available regardless of the vendor.
I expect that VR and AR hardware will improve dramatically in the coming years, and it’s my intention that Kros will be updated to take full advantage of advances as they occur. Potential areas of improvement include the display and hand tracking. But whatever form these improvements take, the new computer experience that’s possible with Kros and mixed reality hardware will only get better.
Thanks for reading, and if you haven’t already, subscribe to the blog or the Twitter feed @KrosDev.
For the past week I’ve been working on restructuring the widget layout process in preparation for improving performance, the first of the three remaining major tasks mentioned in the introductory post. But rather than delving into the details of that restructuring, I think it would be more informative to give an overview of what makes the Kros user experience different, since widgets (such as buttons, sliders, and text fields) are part of the user interface.
The Kros User Experience
The primary objective of Kros is to enable the full potential of a portable device with a 3-dimensional mixed reality user interface. The advantages that such a system could provide are plentiful.
A 3D interface with a portable device provides an order of magnitude more space for computing activities. With a smartphone, tablet, or a traditional monitor (even a very large one), the amount of your field of view that you can actually take advantage of is relatively small. With Kros, you could have 360 degrees of screen space, and not just horizontally, but vertically too. So there’s no more need to shuffle windows around or change focus. You just interact.
A mixed reality user interface in 3 dimensions allows for more natural approaches to user input which are, well, naturally easier to use and also more enjoyable. To perform most tasks in Kros, the user’s hands directly push buttons, grab windows, and so forth without the need for an intermediate device such as a mouse. As much as possible, Kros’s virtual objects behave like the corresponding real objects, providing visual and spatial feedback for user actions. Thus, when you as the user press a Kros button, you see the button moving as your finger depresses it – just like a real button. And in most activities, whether work or play, you would use your hands in a natural way to carry out actions. For example, when playing a sword fighting game, you could hold your virtual sword with your real hand as you fight your virtual opponent.
In addition, a mixed reality user interface in 3 dimensions can provide immersive experiences. For instance, with a 3D painting/sculpting program running on Kros, you could use your hands to choose colors, then apply them in 3 dimensions as you move freely around a virtual object you’re creating. You could do this in augmented reality within your own living room or office or in full virtual reality where you might paint while standing on the rim of a virtual Grand Canyon or sculpt from beside the Trevi Fountain in a virtual Rome. The possibilities for experiencing new places or activities are almost limitless.
With these advantages – more usable space, more natural approaches to user input, and a more immersive experience – together with portability, Kros can unlock exciting new possibilities for computing.
So stay tuned, and if you haven’t already, subscribe to the blog or the Twitter feed @KrosDev.
Imagine a wearable, portable, all-purpose computer that has, not a small flat home screen, but a 3D home world that surrounds you. Imagine being able to do all your computer activities, from work to play, from the mundane to the sublime, with intuitive control in a mixed reality environment. Imagine how this computer would empower you to be more productive, be more creative, and have more enriching experiences than you could with a PC or smartphone.
This new computer experience is made possible by combining VR hardware with the right operating system. The necessary hardware – a VR headset with video passthrough and hand tracking – is already available, but there is no operating system that produces this new computer experience – at least not yet. Current general purpose operating systems, such as Windows and Android, are not designed for mixed reality and so don’t take full advantage of the environment. The operating systems built specifically for VR, like the Oculus Quest’s fork of Android, are not general purpose and exist primarily to facilitate standalone VR experiences. Even the few attempts to bring a general purpose workspace to VR, like Windows Mixed Reality and vSpatial, ultimately emulate traditional computing, with point-and-click interactions replacing mouse input.
That’s why I’m developing Kros, a purpose-built open source operating system for wearable, portable computers with an immersive display, natural input, full positional awareness, and seamless transitions between reality, augmented reality, and virtual reality.
I began full-time work on Kros in May of 2017. Since I knew I probably couldn’t produce a complete operating system working alone, I set an initial goal of creating a proof-of-concept program to, hopefully, wow potential backers and investors so I can raise the money needed to hire other software engineers to make Kros a true operating system.
During the past 3 years, I’ve made a lot of progress, and the proof-of-concept is almost complete. A large portion of the work has been on the user interface since I believe that to be the most important part of this proof-of-concept.
But I’ve done work on some operating system services and demo applications as well. And now, three major tasks remain before the proof-of-concept is complete and I start seeking funding:
- Improve performance by overhauling widget size layout.
- Finish porting the code to the Nvidia Jetson AGX Xavier, which I’m using as the portable computer unit for my proof-of-concept.
- Switch to hand tracking code running locally on the Jetson so that I don’t have to tether the proof-of-concept to a second computer.