Friday, May 15, 2015

Fruit exploder at Maker Faire




I'll be blowing up tomatoes, smartphones, and anything else that catches my fancy.   Also someone has offered a brand new Arduino Uno that we are going to raffle off for free (probably at 3:30).  Tentative showtimes at the Cupertinker booth in the main hall:

Saturday
  • 11:30 am
  • 1:30 pm
  • 3:30 pm
  • 5:30 pm
Sunday
  • 4:30 pm

Friday, April 3, 2015

Cone Beam Computed Tomography (CBCT) test shot

I've been playing with a dental x-ray sensor and getting some pretty good results.  Ultimately I'd like it to help reverse engineer PCBs and related security modules.  Here is a shot where you can clearly see bond wires:






This is still relatively early in my testing and I can probably still do a lot more to get better resolution.


With basic imaging working, I'm trying to take it a step further and do CBCT.  Sneak peek at the setup:


I made some test shots into a video, but I'm still in the process of stitching them into a proper 3D model.  I'm likely going to use plastimatch and/or RTK.

Once I get that working I'd like to try laminography ("5DX").  Although I haven't looked too hard, I get the impression there might not be much FOSS software for this, so it might require a bit more effort.

EDIT: slides from a presentation I did

Monday, March 9, 2015

80kV fruit exploding machine

About 5 years ago, when I was in college at RPI, I used a half ton capacitor bank to blow up apples and other foods.  I'm not in this particular video (stupid test) but this gives you an idea: https://www.youtube.com/watch?v=2EhuHs7IdM0

It was a lot of fun and I've wanted to do something like it again for a while.  I have a large number of ~400V rated pulse capacitors:


that I've used before to blow up wire, build small railguns, etc. with good success.  However, I also have an 80 kV death capacitor that I've always wanted to use:


The death part is pretty literal:


"WARNING: THE ENERGY STORED IN THIS CAPACITOR IS LETHAL".  Not "may be"...*IS*.  And they are probably right: don't touch it while its live.

But the capacitor is pretty boring without a way to power it.  4 years or so ago I used a 5kV capillary electrophoresis power supply to give it a token charge:


This made a reasonably large snap but, because capacitor energy is 0.5 * C * V^2, a 5 kV charge holds only 0.4% of the maximum energy.  After that it was put into storage and didn't see any use for some time due to lack of a large power supply.  I bought some capacitors and diodes to make a CW multiplier for my neon sign transformer but never built it.
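As a quick sanity check of that 0.4% figure (the capacitance cancels out of the ratio, so only the voltages matter):

```python
def energy_fraction(v_charge, v_rated):
    """Fraction of full stored energy at a partial charge.

    E = 0.5 * C * V^2, so the ratio is (V1 / V2)^2 and C cancels out.
    """
    return (v_charge / v_rated) ** 2

# 5 kV token charge on the 80 kV capacitor
print(f"{energy_fraction(5e3, 80e3):.2%}")  # -> 0.39%
```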

Roll forward to present day.  This capacitor came out of an x-ray (diffraction or crystallography IIRC) system at RPI.  Somewhat related, I've been doing some x-ray work and had a unit fail (more on that in another post).  I opened it up and found I was able to salvage the high voltage (100kV) transformer:


For the following I'm currently only using the left half, which seems to give about 60kV DC max.  Worth mentioning: I could have grabbed the original system's transformer as well, but it was very heavy/large, so it was left behind.

I started with some spark testing to verify the transformer worked.  The transformer must be submerged in oil to work properly:







There's about a gallon of oil in there, and the box was tipped to one side so that the transformer was fully covered.  Then I made some arcs:


The gap in the above picture is about 2".

But this is AC, and to fully charge the capacitor I need DC.  I still had the original full-wave rectifier that powered the capacitor but, for reasons I don't recall, I only have the bare diodes and not the rectifier assembly.  So the next step was to assemble them into a rectifier module.  Originally I did a classic square rectifier design:


But then I realized this required crossing an output high voltage line over an input source, potentially leading to a breakdown.  So I redesigned it in an X pattern so that the polycarbonate sheet provided insulation between the two halves:


Per the labels, AC comes in on the left and DC exits on the right.

Next, we need a trigger.  In the simplest tests it's fine to set a spark gap and let it trigger automatically, but longer term I want to control exactly when it fires.  In the past I used a pneumatic switch to trigger a spark gap, which worked pretty well.  I had some 2" stroke solenoids, which aren't ideal (remember I said I could jump 2" above?) but were good for a first test.  Setup:


Note on the right you can see a 40kV probe for "low voltage" tests.  The blue silicone tube goes to a compressor on the right.  I simply open up the regulator to close the switch.  I also built a polycarbonate platform that you can see in the above picture.

This system needs a way to safely discharge.  Long term I want an emergency stop, but short term I'm fine with simply bleeding off charge.  My goal is to have the system safely discharged (50V) within about 10 seconds.  I think this didn't work out in the end, but in theory the voltage follows V = V0 * e^(-t/(RC)) where:
  • V0 = 80 kV (initial charge voltage)
  • V = 50V
  • t = 10 s
  • R = ?
  • C = 0.25 uF
Solving for R gives about 5.4 MΩ.  That was my original goal; for reasons I don't recall I instead used 800M (was 5.4M too high a current draw?).  With 800M the discharge to 50V instead takes roughly 25 minutes, which is slower than I'd like but workable for now.
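For reference, solving the discharge relation for R with the numbers above (a quick back-of-the-envelope check, not code from the project; V0 = 80 kV is assumed to be the full charge voltage):

```python
import math

# Bleed resistor sizing: V(t) = V0 * exp(-t / (R*C))
# => R = t / (C * ln(V0 / V))
V0 = 80e3    # initial charge voltage (V)
V = 50.0     # "safe" target voltage (V)
C = 0.25e-6  # capacitance (F)
t = 10.0     # target discharge time (s)

R = t / (C * math.log(V0 / V))
print(f"R = {R / 1e6:.1f} Mohm")  # ~5.4 Mohm, matching the goal above
```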

Putting it all together, this is what the prototype schematic looked like:


Some artistic license on the schematic: I loosely interchanged electrical paths with pneumatic paths.

Time for results.  The first victim was a Galaxy 2 phone donated by a coworker.  A couple-minute video of the test process is here: https://www.youtube.com/watch?v=Q-sTKhyaM3Y

The phone wasn't blown in half, but it was a good start: the screen was cracked and burned.  The phone ultimately succumbed, so I call it a success.

But the goal was to blow up some fruit.  First I tried placing a tomato directly inside the gap: https://www.youtube.com/watch?v=CgCAwcUGMeg&list=UUq3z1paLNFugoH3yTYokMIg

But this didn't work so well: a tomato has a lot of water, which nicely absorbs the energy.  Next idea: put the electrodes inside the tomato, separated by 0.25".  This focuses the energy into a small space and forms a proper explosion.

This worked great: https://www.youtube.com/watch?v=E4cW6JSh3lc

Mission success!  There is one less tomato in the world.  I also changed the camera ISO from auto to a fixed ISO 400 to get better explosion coverage, and upped the frame rate from 30 fps to 60 fps to catch more gore.  I have a 1000 fps Casio that I'll try to use in the near future.

Finally, how to blow the phone in half?  I have a tentative agreement to upgrade the system from 800 J to 12-13 kJ.  More to come.


(note the ruler for scale)

Wednesday, February 11, 2015

Drying transformer oil

I acquired an x-ray head a bit back off of Craigslist:


and was experimenting with some screens:


to produce some images.  The setup is remotely switched and uses a webcam to view the intensifying screens meaning I don't have to be in the room while it runs.  I started to do some "low voltage" tests (maybe 50kV?) but didn't see any images.  I'm not sure what range the screens are sensitive to so I decided to crank up the voltage some.

In the past I had run my head at 100kV for a number of experiments with no issue.  Unfortunately, as I turned up the voltage this time, it developed an internal short a few seconds after turning it on.  Drat!  I had been warned these old GE heads can suck in moisture, and it's possible that's what happened.

Fortunately, when I drained the oil it was still pretty clear, indicating *potentially* no major damage.  So I'm taking two paths to get the system back online.

First, some cheap GE x-ray heads showed up on eBay so I picked up a few:




The bottom left unit is my original Craigslist special.  I have need of some high voltage DC supplies so I should be able to make use of  the lot even if they are excess to my x-ray needs.

However, these heads can still develop the same short if the oil has moisture.  I picked up some mineral oil and tried to dry it out under vacuum in a 2L reactor.  Unfortunately, it seemed to steadily bubble for some hours.

I talked to someone who suggested that the trick is to get a lot of surface area to let the water out quicker.  So I tried to set up something resembling a vacuum distillation rig:


At the bottom is the 2L reactor with 24/40 joints.  The red hose is the vacuum feed.  The bottom hose has a siphon (like you'd find in a vacuum trap) that leads to a peristaltic pump (not shown) and then recirculates to the top.  From there it feeds a Vigreux condenser to give it lots of surface area to outgas.

I knew that the pump wouldn't prime under vacuum but figured it could be primed under atmosphere and then would circulate.  Unfortunately, the oil outgases heavily under vacuum, causing the pump to de-prime.

To solve this, the next step is to try to gravity feed the reactor into the pump.  Thus, even if it outgases, gravity will feed oil into the pump and cause it to prime under vacuum.  I'll likely have to put the reactor partly on its side which means that it could come apart and make a huge mess.  Most of the work will be trying to ensure this doesn't happen.  I have a clamp for the reactor lid and can tape the rest of the ports in.  Under normal circumstances, vacuum should hold everything together but it must be able to prime as well as survive returning to atmospheric pressure.

List of digitized chips

Collected list of chips that I know have had their layout, schematic, etc recovered: https://siliconpr0n.org/archive/doku.php?id=digitized

Sunday, January 4, 2015

siliconpr0n.org goes secure

I've had a self-signed cert on siliconpr0n.org, which had some limited use.  There should be a real SSL cert now.

EDIT: the map issue has been fixed

Thursday, January 1, 2015

Scaling up panotools some more

To recap, some things that I currently do to help stitch large IC photos with pr0ntools:
  • pr0nstitch.py: creates baseline .pto.  Instead of doing n to n image feature match, only matches features to adjacent images.  This reduces feature finding from O(n**2) to O(n)
  • pr0nts.py: creates image tiles from .pto file. This allows me to stitch large panoramas with limited RAM
  • Some misc wrapper scripts such as pr0npto.py that helped work around some peculiarities of tools like PToptimizer
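The adjacent-only matching idea in pr0nstitch.py can be sketched roughly like this (a minimal illustration, not the actual pr0ntools code; `adjacent_pairs` is a hypothetical helper).  For an r x c capture grid, each image is matched only against its right and down neighbors, so the number of pairs grows linearly with n instead of quadratically:

```python
def adjacent_pairs(rows, cols):
    """Yield index pairs for right/down neighbors in a rows x cols grid.

    For n = rows * cols images this yields fewer than 2n pairs, versus
    n * (n - 1) / 2 pairs for exhaustive all-vs-all feature matching.
    """
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                yield i, i + 1     # right neighbor
            if r + 1 < rows:
                yield i, i + cols  # down neighbor

print(len(list(adjacent_pairs(3, 3))))  # 12 pairs for 9 images, not 36
```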
For large panoramas, there were still a few problematic parts:
  • Big problem: PToptimizer is O(n**2).  Very slow for large image sets.  IIRC 1200 images took a week to optimize
  • Cropping and rotating .pto in Hugin is unnecessarily slow since I only use the outer images for alignment.  With n images only about O(n**0.5) are used
  • pr0nstitch.py was single threaded.  I recently picked up a beefy machine at a flea market and now had an incentive to take advantage of the extra cores
Other issues:
  • pr0nstitch.py: I couldn't get cpfind to produce good results so I used closed source autopano at its core to do feature matching.  This has a lot of hacks to work through wine which is complicated and hard to setup
Let's see what we can do about these.

Regarding optimization, panotools has a tool called autooptimiser with a nifty feature: "-p Pairwise optimisation of yaw, pitch and roll, starting from first image."  This sounds a lot better than the O(n**2) optimization that PToptimizer does.  Unfortunately, as stated, and as verified through my trials, it does not optimize position.

So, I decided to see what I could do myself.  Most of the time is spent getting the panorama near its final position.  However, since the images are in an xy grid, we have a pretty good idea of where they'll be.  Better than that, since the images are simple xy translations, we should be able to estimate pretty close to the final position simply by lining up control points from one image to another.  By taking the average control point distance from one image to another, I was able to very quickly construct a reasonably accurate optimization estimate.  The upper left image is taken as coordinate (0, 0) and everything is placed relative to it.  I then iterate the algorithm to fill in all remaining positions.  Presumably I could iterate additional steps to refine it further, but at this time I simply hand off the pre-optimized project to PToptimizer.
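The idea can be sketched as follows (hypothetical code and data layout, not the actual pr0npto implementation; control points are assumed here as (img_a, img_b, xa, ya, xb, yb) tuples):

```python
from collections import defaultdict, deque

def pre_optimize(control_points):
    """Estimate image positions from pairwise control points.

    control_points: (img_a, img_b, xa, ya, xb, yb) tuples where point
    (xa, ya) in image a matches (xb, yb) in image b.  For pure xy
    translations, image b's origin sits near (xa - xb, ya - yb)
    relative to image a; averaging over all shared points smooths noise.
    """
    # Average the offset over all control points shared by each pair
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for a, b, xa, ya, xb, yb in control_points:
        s = sums[(a, b)]
        s[0] += xa - xb
        s[1] += ya - yb
        s[2] += 1
    offsets = {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}

    # Walk outward from image 0 at (0, 0), placing each image relative
    # to an already-placed neighbor
    pos = {0: (0.0, 0.0)}
    queue = deque([0])
    while queue:
        cur = queue.popleft()
        for (i, j), (dx, dy) in offsets.items():
            if i == cur and j not in pos:
                pos[j] = (pos[cur][0] + dx, pos[cur][1] + dy)
                queue.append(j)
            elif j == cur and i not in pos:
                pos[i] = (pos[cur][0] - dx, pos[cur][1] - dy)
                queue.append(i)
    return pos

# Two overlapping pairs in a horizontal strip:
pos = pre_optimize([(0, 1, 120.0, 4.0, 0.0, 0.0),
                    (1, 2, 118.0, 2.0, 0.0, 0.0)])
print(pos[2])  # (238.0, 6.0)
```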

This has a dramatic performance impact.  Here's some results from a project with 805 images:
  •  pr0npto --optimize: real    488m47.457s
    • Normal PToptimizer
  • pr0npto --pre-opt: real    0m58.843s
    • Pre-optimizer only with no PToptimizer final pass
    • rms error: 0.293938367201598 units
  • pr0npto --pre-opt-pt: real    29m39.604s
    • Pre-optimizer followed by PToptimize
    • rms error:  0.215644922039943 units
    • Took 3 iterations
--optimize and --pre-opt-pt should produce about the same final result.  I forgot to check what the final optimization score of --optimize was.  Anyway, the above results show that the pre-optimizer was able to pre-position images to better than 1/3 pixel on average.  A final optimizer pass was able to slightly improve the result.

The next problem was crop/rotation.  There are two problems:
  • It's unnecessarily slow to create thumbnails for many images, most of which will just be thrown away
  • I'm not sure what the order of the preview algorithm is, but it's much worse than O(n) on my laptop:
    • 74 images: 10.6 img / sec
    • 368 images: 2.9 img / sec
My solution: create a wrapper script (pr0nhugin) that eliminates the unused images.  This won't fix Hugin's O(>n) issue but mitigates it by keeping n low.  Basically it deletes the unused interior images and opens a sub-project with the reduced image set.  After Hugin closes, the new crop/rotate parameters are merged back into the original project.
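The reduction amounts to keeping only the perimeter of the capture grid (illustrative code, not the actual pr0nhugin logic; the 16x23 grid size is a guess that happens to match the 368/74 figures below):

```python
def border_images(rows, cols):
    """Return indices of the outer ring of a rows x cols image grid.

    Only the perimeter images affect the final crop/rotation, so the
    interior images can be dropped before opening Hugin.
    """
    return sorted(
        r * cols + c
        for r in range(rows)
        for c in range(cols)
        if r in (0, rows - 1) or c in (0, cols - 1)
    )

print(len(border_images(16, 23)))  # 74 of 368 images survive
```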

Sample results:
  • Original project: 368 images in 128 seconds
  • Reduced project: 74 images in 7 seconds
Original project in fast pano preview:



Reduced project in fast pano preview:



128 seconds isn't so bad, but this is a smaller project and it gets much worse for larger projects.  I could also probably eliminate every other image or something like that to speed it up more if needed.

Finally, I figured out what I was doing wrong with cpfind.  It was quite simple really: I just needed to use cpclean to get rid of the control point false positives.  Comparison of optimization results:
  • pr0nstitch w/ autopano: rms 1.09086625512561 units
  • cpfind/cpclean optimize pitch/roll/yaw xyz: rms 6.25849993203006 units
    • Only xyz should change; it's optimizing things it shouldn't
  • cpfind/cpclean optimize xy: 0.510798407629321 units
    • IIRC this was through new pr0nstitch but can't recall for sure
Some quick tests show that cpfind/cpclean meets or exceeds autopano performance.  Given the pains of coordinating through WINE (they have a Linux version but it doesn't work as well as the Windows one), cpfind is the clear choice going forward.

Now, with the backend tools working more smoothly, I was able to parallelize feature finding more easily.  Basically pr0nstitch now spawns a bunch of worker threads that each calculate features between two images.  A main thread merges these into the main project as workers complete matching sub-images.
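That worker pattern looks roughly like this (a simplified illustration with a stubbed-out matcher; the real pr0nstitch shells out to the panotools binaries on two-image sub-projects):

```python
from concurrent.futures import ThreadPoolExecutor

def match_pair(pair):
    """Stand-in for per-pair feature matching.

    The real workflow runs the feature finder on a two-image
    sub-project; here we just fabricate a result so the threading
    pattern is runnable on its own.
    """
    a, b = pair
    return (a, b), [f"cp_{a}_{b}"]

def match_all(pairs, workers=8):
    """Match all image pairs in parallel, merging in the main thread."""
    merged = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Workers each handle one pair; only the main thread touches
        # the merged project, so no locking is needed
        for pair, cps in pool.map(match_pair, pairs):
            merged[pair] = cps
    return merged

print(match_all([(0, 1), (1, 2), (0, 3)]))
```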

There is also a stitch wrapper script that pulls these tools into a recommended workflow.

Summary of new round of improvement:
  • pr0nstitch.py
    • Parallelized
    • Switched to panotools feature finding/cleaning.  Now fully open source and much easier to set up
  • pr0npto.py: new efficient --pre-opt-pt optimizer
  • pr0nhugin.py: hugin wrapper to edit reduced .pto files for fast editing
  • Added "stitch" workflow script