WIP: A Betavoltaic Cell to Power All the Things

While browsing Thingiverse, a 3D model hosting site, I stumbled upon this. It’s a betavoltaic cell, meaning it generates electricity from a source of beta radiation. In this case, the beta particles come from a small vial of tritium gas. They excite a phosphor coating on the vial, which glows and powers a small array of solar cells. Tritium-fueled betavoltaic cells can provide power without intervention for up to 20 years! I found some flaws in his design, though, so I’m designing my own.

The main problem with the design I found is that it uses extremely inefficient solar cells: only about 45% of their surface area is photovoltaic material, so the rest is wasted. The tritium vial puts out very little light, so every photon counts. Furthermore, I wasn’t able to find a panel of the same size that would still fit while using its full surface area. My design uses these instead. They aren’t much bigger, but they have five times the power output, which is a colossal difference.
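To get a feel for how much the wasted surface area matters, here is a rough back-of-the-envelope comparison. At these very low light levels, captured light scales roughly with active photovoltaic area; all the dimensions below are illustrative stand-ins, not measurements of the actual parts.

```python
# Rough comparison of light capture between a cell that is only 45%
# photovoltaic material and a slightly larger full-coverage panel.
# The footprints are made-up illustrative numbers.

old_area_mm2 = 20 * 20          # hypothetical footprint of the original cell
old_active_fraction = 0.45      # only ~45% of its surface is photovoltaic

new_area_mm2 = 22 * 22          # hypothetical slightly larger replacement
new_active_fraction = 1.0       # full-surface photovoltaic material

old_capture = old_area_mm2 * old_active_fraction
new_capture = new_area_mm2 * new_active_fraction

print(f"relative active area: {new_capture / old_capture:.2f}x")
```

Even a modestly larger panel with full coverage captures several times the light of the original, before any difference in cell quality is considered.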

There is still research to do before maximum efficiency can be achieved. For example, at what light level do the solar cells perform most efficiently? The more panels are put in a betavoltaic cell, the less light each one receives. If their efficiency were perfectly linear, it would even out: the betavoltaic cell would generate the same amount of power no matter the number of solar cells, so fewer cells would be better due to the reduced cost. However, this is the real world, and nothing is ever that perfect. Solar cells will have a peak efficiency at a certain light level, and that peak determines the ideal number of cells.
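The trade-off above can be sketched with a toy model: a fixed light budget split among n cells, with a made-up efficiency curve that peaks at some illumination level. The curve, the peak location, and the total light figure are all invented for illustration; real cells would have to be measured.

```python
# Toy model: total output vs. number of solar cells sharing a fixed
# light budget. The efficiency curve below is invented (it peaks at an
# illustrative 50 uW per cell); it is not data for any real panel.

def efficiency(irradiance_uW):
    """Hypothetical efficiency peaking at 50 uW, falling off on both sides."""
    x = irradiance_uW / 50.0
    return 0.3 * x / (1 + x ** 2)   # maximum value 0.15 at x = 1

total_light_uW = 200.0  # illustrative total light output of the tritium vial

for n in (1, 2, 4, 8, 16):
    per_cell = total_light_uW / n
    power = n * per_cell * efficiency(per_cell)  # = total * eff(per_cell)
    print(f"{n:2d} cells: {power:.2f} uW")
```

With this curve the output peaks at four cells, where each cell sits exactly at its most efficient light level; fewer cells leave them over-lit, more cells starve them. The real optimum depends entirely on the measured curve.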

The betavoltaic cell. Unpainted.
The render of the cell. This is what it will look like when painted.

The files to make your own can be found on my git server.



Kinect as a Greenscreen

While looking for an old project on my hard drive, I happened to stumble upon a different project – a Kinect-enabled green screen that I wrote months ago. It uses a user-defined depth threshold to determine which pixels to replace; the default is 2000 mm, or 2 m. Any pixel closer than that counts as ‘foreground’, and the corresponding background-image pixel is replaced with the pixel from the Kinect camera. This method has a massive advantage: no special lighting, background, or equipment is needed. Not even a flat wall!
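The compositing step itself is simple. Here is a minimal sketch of it using NumPy arrays as stand-ins for live Kinect frames – `camera`, `depth_mm`, and `background` are hypothetical placeholders for the Kinect color image, its depth map, and the chosen backdrop; this is not the original program’s code.

```python
# Depth-threshold compositing: keep camera pixels closer than the
# threshold, fill everything else with the background image.
import numpy as np

def composite(camera, depth_mm, background, threshold_mm=2000):
    """Return an image where pixels nearer than threshold_mm come from
    the camera and all others come from the background image."""
    foreground = depth_mm < threshold_mm              # HxW boolean mask
    # Broadcast the mask over the color channels and pick per pixel.
    return np.where(foreground[..., None], camera, background)

# Tiny 2x2 example: top row is "near" (kept), bottom row is "far" (replaced).
camera = np.full((2, 2, 3), 255, dtype=np.uint8)      # white subject
background = np.zeros((2, 2, 3), dtype=np.uint8)      # black backdrop
depth = np.array([[1000, 1500], [3000, 4000]])        # depths in mm

out = composite(camera, depth, background)
print(out[0, 0], out[1, 1])   # → [255 255 255] [0 0 0]
```

The same per-pixel selection works on full-resolution frames in real time, since `np.where` is a single vectorized pass over the arrays.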

Of course, I wrote the program as a proof of concept more than anything, so there is no option to record – it just presents the modified image in real time. In fact, many variables that are now adjustable through the GUI could only be changed within the code before I rediscovered the project and decided to improve it slightly.

The code can be found here.

Of course, feel free to contribute. I’ve included a TODO list of features that I’ve neglected to add.