Archive for the ‘hackEns’ Category

August 16th, 2013 @ 15:47

Kliment, of Printrun/Sprinter/4pi/… fame, has struck again. Sponsored by Joaz (jglauche@#reprap, who runs RepRapSource), he developed simarrange, a smart STL plater. Basically, you throw it a bunch of STL files or a directory with STL files, and it’ll attempt to fit them into plates of the given size (200x200mm by default, circular plates are also supported).
The algorithmic bits behind this are basically a greedy search: the software tries to fit the items from biggest to smallest, and attempts to place each item closer to the bottom left corner first (or closer to the center), in any possible rotation. The spatial and angular search granularity can easily be controlled with command line parameters. To decide whether an object can be placed at a given position, the object geometry is projected onto the 2D plate surface and bitmap-based logical tests are used to detect overlap.
This method seems to produce tightly packed plates, as can be seen in these examples (the software outputs both the plate STL and these small visualizations):
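The search loop is easy to sketch. Here is a hypothetical, heavily simplified Python rendition of the bitmap-based placement test (the real simarrange is written in C and works on projected STL geometry; the function names and the plain nested-list "bitmap" below are illustrative only):

```python
# Hypothetical sketch of simarrange-style placement: each part is a 2D
# occupancy bitmap; we scan positions starting from the bottom-left
# corner and accept the first spot with no overlap on the plate bitmap.

def fits(plate, part, x, y):
    """True if `part` can be placed with its corner at (x, y)."""
    ph, pw = len(part), len(part[0])
    if y + ph > len(plate) or x + pw > len(plate[0]):
        return False
    return all(not (part[j][i] and plate[y + j][x + i])
               for j in range(ph) for i in range(pw))

def place_first_fit(plate, part, step=1):
    """Greedy search: try positions closest to the bottom-left first."""
    for y in range(0, len(plate), step):
        for x in range(0, len(plate[0]), step):
            if fits(plate, part, x, y):
                # Mark the part's cells as occupied on the plate.
                for j in range(len(part)):
                    for i in range(len(part[0])):
                        if part[j][i]:
                            plate[y + j][x + i] = 1
                return (x, y)
    return None  # part does not fit on this plate
```

The real program additionally loops over candidate rotations of each part and over plates, but the overlap test stays this simple thanks to the bitmap representation.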


Example plates produced by simarrange

I quickly jumped in to hack a few speedups, namely to avoid trying to fit objects which we already failed to fit on the current plate (in the case of making copies) and to detect if it is possible to make copies of each produced plate (which can save a lot of time if you’re plating dozens of printer kits for instance), and I contributed part of the code to make centered packing work. Kliment then added multicore parallelism yesterday (using OpenMP for full cross-platform support) to search over rotations in parallel. With all these optimizations, plating jobs which would have initially taken entire days now run in less than 10 minutes :)

Oh, and it’s already integrated into Printrun’s plater :)


Pronterface’s integration of simarrange

June 24th, 2013 @ 16:30

For more than one month now, I’ve been working on a daily basis on Printrun, one of the most widely used 3D printer hosts (the program that sends G-code to your printer and lets you monitor how things are going). I had already given it a bit of love in the past, doing some internationalization and translation work, splitting the GUI code into a more manageable class-based patchwork and reworking the serial connection bits, but this batch of changes has much more influence on the user side.

Just to boast a little bit, let me quote Kliment, of RepRap/Sprinter/4pi/Printrun fame, who created and maintains Printrun:

13:52:12 < Kliment> On the sixth day, Kliment created the pronterface and the pronsole, and saw that it was…pretty crap. Then iXce came and helped out.

So, here are the main changes:

Cleanup of G-Code parsing

I initially came back to hacking Printrun because I thought loading G-code files was really too slow, and I quickly figured out that it was mostly due to the fact that we were parsing the G-code several times (there were at least 6 different G-code parsers in the code, and loaded G-code was parsed at least three times immediately upon loading). By rewriting and merging all those parsers, I was able to reduce the startup plus file loading plus visualization preparation time from about 13 seconds to 4.5 seconds on average, using a 3.8MB G-code file of about 140000 lines, while the new parser is also doing more things than the old ones.
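As an illustration of the parse-once idea (hypothetical code, not Printrun's actual parser, which keeps much richer per-line metadata), a minimal single-pass G-code parser could look like this:

```python
# Hypothetical single-pass G-code parser: each line is split into
# letter/value words exactly once, and the parsed result is kept for
# later consumers (viewer, time estimator) instead of being re-parsed.
import re

WORD_RE = re.compile(r"([A-Za-z])\s*([-+]?[0-9]*\.?[0-9]+)")

def parse_line(raw):
    """Strip the comment and return {letter: float} for one line."""
    code = raw.split(";", 1)[0]
    return {letter.upper(): float(value)
            for letter, value in WORD_RE.findall(code)}

def parse_gcode(lines):
    """Parse once, keeping both the raw line and its parsed words."""
    return [(raw, parse_line(raw)) for raw in lines]
```

Every downstream consumer then works on the stored dictionaries rather than running its own regex pass over the file.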

Improvement of the remaining print time estimation

The previous remaining print time estimation simply did a cross-multiplication between elapsed time, the number of sent G-code lines and the number of remaining G-code lines, which happened to be very wrong in many cases. The early estimations were completely biased by the fact that the first few layers are usually printed much slower than the other ones, and the average time per G-code line means pretty much nothing: a long line can be a single G-code command while a tiny circle can be 50 of them, yet printing the line can take 20 seconds and the tiny circle less than 2, so some commands can take 500 times longer than others. Thanks to the fact that we now have a complete understanding of the G-code (or actually just because we store more meta-information), we can compute a much smarter estimate, based on the estimated total duration (the one you see after loading the file) computed on a per-layer basis, which we correct by incorporating the bias observed between this estimation and the actual print time for the elapsed layers. This way the ETA should stay within 10% of the actual remaining time at any point (unless you have a very, very weird machine), and should get closer and closer over time.
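A toy sketch of this bias-corrected estimate (hypothetical code, not the actual Printrun implementation, which estimates per-layer durations from the parsed moves):

```python
# Hypothetical sketch of the bias-corrected ETA: start from the
# per-layer duration estimates computed at load time, then scale the
# remaining time by the ratio between the actual and estimated time
# for the layers already printed.

def remaining_time(layer_estimates, current_layer, elapsed):
    """ETA in seconds, given per-layer estimated durations (seconds),
    the index of the layer being printed, and elapsed print time."""
    done = sum(layer_estimates[:current_layer])
    remaining = sum(layer_estimates[current_layer:])
    bias = (elapsed / done) if done > 0 else 1.0  # actual vs. estimated
    return remaining * bias
```

If the machine consistently prints 50% slower than estimated, the bias factor converges to 1.5 and the ETA stretches accordingly.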

Optimization of the 2D print visualization

The 2D print visualization was really slow, and was even known for slowing down prints because screen refreshes were taking too much time. The drawing code was indeed redrawing every single line on each refresh, which would happen once for each sent G-code. We now buffer drawings as much as possible: complete redraws only occur when changing layers or resizing the canvas, every other refresh is done by updating a buffered bitmap with the newly added lines and drawing it to the screen. Visually, not much has changed except that the print bed extents should appear in a cleaner way (the yellowish background color does not leak anymore), that the grid should correctly start in the bottom left corner, and that thanks to other interface changes the resizing behavior should be much nicer.
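The buffering idea itself is framework-independent. Here is a hypothetical sketch of the scheme, with a plain list standing in for the off-screen bitmap (the real code draws into a wx bitmap and blits it):

```python
# Hypothetical sketch of the buffering scheme: instead of replaying
# every drawing command on each refresh, new segments are drawn once
# into a persistent buffer, and a refresh just blits that buffer.

class BufferedCanvas:
    def __init__(self):
        self.buffer = []   # stands in for the off-screen bitmap
        self.pending = []  # segments added since the last refresh

    def add_segment(self, seg):
        self.pending.append(seg)

    def refresh(self):
        """Draw only the new segments, then 'blit' the buffer."""
        self.buffer.extend(self.pending)
        drawn = len(self.pending)
        self.pending.clear()
        return drawn  # work per refresh is O(new), not O(everything)
```

Complete redraws (layer change, resize) just rebuild the buffer from scratch; every other refresh only pays for what changed.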


The optimized 2D viewer, with proper resizing behavior, correct grid and different colors for background and build area

Addition of a 3D viewer

Even better than optimizing the 2D viewer, we added a proper and very fast 3D viewer, based on tatlin, a standalone G-code and STL viewer originally built on GtkGLExt and PyOpenGL, which we ported to Pyglet so as not to add another dependency, and adapted it to use our already parsed G-code. As it uses the GPU to do most of the drawing (we store the model to display in GPU memory if possible, so that the CPU only has to tell “print from this vertex to this vertex with this color or GPU-stored color array”), it’s super light for the CPU, and as it’s done in 3D we can zoom (and zoom to point of interest), pan and rotate in 3D almost for free.

The new 3D viewer (also shown zoomed, panned and rotated)

Addition of a clean, tabbed options dialog

Another nice improvement was the addition of a clean options dialog, with options grouped in different tabs and being displayed with appropriate widgets: checkboxes for boolean options, combo boxes for multiple-choices options, spin buttons for numbers, text fields for text options.


Old vs. new option dialogs

Addition of a part excluder tool

This is probably the most original addition: a G-code based part excluder. How many times have you had a single part fail on a big build plate and had to stop everything just because the plastic was not being correctly deposited on the failed part and started messing up the whole print? Well, this tool is designed to stop this kind of situation from happening. Just hit the Excluder entry of the File menu and draw a few rectangles over the area you want to stop printing, and you’re done (you can also use it to exclude parts of already-sliced plates you don’t want to fully print, without having to reslice minus the extra objects). It basically avoids moving into the defined rectangles, resetting E values if needed instead of extruding, and seems to work fairly well with Slic3r-generated G-code (I haven’t tested with any other slicing engine, and it could break because layer changes have to happen independently of X/Y moves for now).
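To illustrate the E-value bookkeeping, here is a hypothetical sketch assuming absolute extrusion coordinates (the real tool operates on full G-code lines, not bare coordinate tuples):

```python
# Hypothetical sketch of the excluder logic: moves whose target falls
# inside an excluded rectangle are emitted without extrusion, and the
# E (extruder) coordinate is offset so the next included move does not
# try to catch up on the skipped filament (absolute E mode assumed).

def exclude(moves, rects):
    """moves: list of (x, y, e) absolute targets;
    rects: list of (x1, y1, x2, y2) exclusion rectangles."""
    out, e_offset, last_e = [], 0.0, 0.0
    for x, y, e in moves:
        excluded = any(x1 <= x <= x2 and y1 <= y <= y2
                       for x1, y1, x2, y2 in rects)
        if excluded:
            e_offset += e - last_e   # drop this segment's extrusion
            out.append((x, y, None))  # travel move, no extrusion
        else:
            out.append((x, y, e - e_offset))
        last_e = e
    return out
```
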


Failed part excluder tool (right) and print simulation (left) where I stopped printing two of the four objects after a few layers

Addition of new interface layouts

Based on the GUI code modularization from last year, we added two new interface layouts. The first one differs from the default one by placing the log viewer below the printer controls on the left side of the program, freeing more space for the print visualization:


Default (left) vs. compact (right) interface modes

The second mode is more radically different. It splits the interface into two tabs, the first one holding all the printer controls, while the second one holds the print viewer and the log box. This makes it super compact and well suited to tiny screens (it should fit in 700×600 pixels), especially when using wxWidgets 2.9, where an appropriate widget is used for the main toolbar so that it wraps around automatically.


The two tabs of the Tabbed interface mode

Addition of a bigger temperature graph visualization

Last, you can now click on the small temperature graph (which can also optionally be disabled and complemented or replaced by the old temperature gauges) to open a new window showing a much larger version of the graph for easier monitoring. Not the biggest thing ever, but still handy :)


The new large temperature graph

June 13th, 2013 @ 10:59

Here is one more hacking story, mixing 3D printers with the discovery of an awesome hobby: RC modelling.

TL;DR: Euromodel sucks, HobbyKing sort-of rocks, get an integrated IMU, watch our printed hexacopter teaser video :)

While coming back from the 2011 French Robotics Cup, we were discussing which project we could tackle next, and after making something that runs, it quickly appeared that we had to make something that flies. We wanted to make something super stable, which could possibly fly autonomously to a given GPS position (so an airplane was not an option), and we knew about Parrot’s AR Drone (a Wi-Fi-controlled quadcopter), so we decided to make something bigger: a hexacopter, which is a multicopter with 6 propellers.

We thus headed to the closest modelling shop (namely Euromodel near Gare de Lyon in Paris), nicely explained our needs and bought a batch of brushless motors, controllers, 8″ propellers, LiPo batteries and a LiPo charger as advised by the vendor. Using the dimensions of the motors, we designed the first version of our hexacopter, aimed for 3D printing:


Initial hexacopter design

The whole point of the design was to make something super sturdy, which would protect the propellers from crashes. Traditional multicopters use a star-shaped design, where thin carbon fiber (or similar) arms stand out of a central frame, but we thought that materializing the hexagon and adding extra shells around the propellers would let the drone handle crash forces in a much more uniform way.

However, this design had a bunch of flaws that we hadn’t foreseen. The first one is that each part was fairly big, despite the design already being split quite a lot (each propeller cylinder is split in two parts, and we could not split it further with the current design). The largest part barely fit the print bed of our Mendel printer (its smallest bounding box was about 21×21cm), but was too big for my Prusa’s bed. Indeed, this is the first place where the vendor was not super nice: 8″ props are *huge*, but he claimed it was fairly hard to get smaller ones (and now that I’ve heard of HobbyKing, I definitely know this was a big, big lie). Another flaw is that the propeller protections were placed at the level of the motors, and not around the propellers, which sit above the motors.

Given these flaws and given that our Mendel broke 3 days after we started printing this initial design (and that fixing it was beyond our motivation), we had to rework it to significantly shrink it to be able to print it on some other printer. We simply removed most of the outer shells, gaining a couple of centimeters, and removed the inner propeller shells, which were now useless. We also added a proper central electronics block with appropriate connections to the outer frame:


hackEns hexacopter Mark II

After pulling a few all-nighters to print the frame, the real issues with the hardware we had bought at Euromodel started to appear. Crap. It was complete crap, especially the speed controllers, which were cheap, now-discontinued models sold with some airplane sometime in the past. We burnt a couple of them (we still don’t know why), and burnt a few motors too. One of the motors died because of another flaw of the original design, coupled with a flaw of the 3D printer: the inner diameter of the tubes holding the motors was a little bit too small (and the 3D printer was printing the circles even smaller than designed), and on one of them the motor got stuck while trying to spin, which led to its death, producing a beautiful-yet-frightening orange-ish smoke.


The beast being assembled

Sadly we had invested a bunch of money in this, and buying a whole new set of motors and controllers would have been too expensive (at least we thought so; once again, we did not really know about HobbyKing at the time), so we bought replacements. We bought the last two controllers Euromodel had, and replacement brushless motors from the US, and after a few more adjustments, discoveries, and fiddling with the electronics (such as painfully figuring out that our ESCs only accepted 50Hz PWM input), we got it to leave the ground!

[vimeo width=”720px” height=”400px”][/vimeo] Flight and in-flight videos!

As you can see on the video, we had mounted a GoPro on the copter, which made really nice looking videos of the Cour Pasteur at ENS :)
Sadly, after just a few days, we broke it again, breaking two of the twelve arms that form the hexagon shape (damn, we knew the outer shells would have been useful, and we knew I’m a very, very unskilled RC flyer). We now need to either reprint new parts or modify the design to make it even sturdier.

Anyway, we also learnt that building such a machine is not just about designing the copter, printing the parts and assembling them. Flying a drone requires a very, very good flight control unit and sensors (and RC skills, but well, you can’t buy those). We initially planned to go super light and use an Arduino Nano plus a Wiimote-based sensor array, namely the accelerometers from a Nunchuk and the gyroscopes from a Wii Motion Plus extension, following the MultiWii models. As the Nano board was really too Nano, providing too few pins to be convenient, we ended up flying the MultiWii control software on a Flyduino Mega (an Arduino Mega fitted into a MikroKopter-compliant 5×5cm board with 4.5cm-spaced mounting holes) and these sensors. The MultiWii software is an open source flight control system originally designed for AVR platforms such as Arduino. It’s a really cool piece of software, with an awesome community, a lot of good momentum, a bunch of tutorials and nice graphical tools.

We initially had a lot of stability issues, which were partially fixed by tuning the PID parameters. We could never completely get rid of them, and eventually found out that they were most likely due to the sensor boards not being properly lined up and stable in their mountings (i.e. the gyroscope board was not perfectly flat and parallel to the accelerometer board). We ended up ordering a FreeIMU board (which combines a 3-axis accelerometer, a 3-axis gyroscope, a barometer and a magnetometer, plus a sensor fusion device which processes all the readings), which seems to provide a much, much better signal than our original assembly according to the tests we made, though we could not field-test it as it arrived after the “final” crash. But we’ll fly with the FreeIMU as soon as we repair the mechanical breakage, hopefully for the best :)

To conclude, if I were to run another such project, I’d probably jump directly to a proper IMU such as the FreeIMU (or at least something well integrated on a single board), buy stuff from HobbyKing (they now have two warehouses directly in Europe, and shipping from Hong Kong is just a matter of patience) instead of spending hundreds of euros on crappy hardware (our crappy 15A ESCs were 40€, while HobbyKing has nice 30A ones for less than 15€ in the German warehouse…), and would definitely reuse a MultiWii-based control platform (open source rocks, and the community is *great*).

June 1st, 2013 @ 13:20

Back in 2011, we did not build our first 3D printer for the sole purpose of doing it; we also (and probably mostly) did it to be able to build other things, and more specifically robot parts. Taking part in the French Robotics Cup had been a project some of us had even before creating the club, but we were lacking momentum and tools. One of the main issues is that ENS is not an engineering school, but rather a pure theoretical science school, which left us with no prior experience and almost no access to proper hardware tools (the only ones we could have accessed belonged to Physics labs and were securely locked).

Thus, being able to create our own parts and build them from the bottom up (as opposed to machining from top down) was a huge thing. With our custom-designed plastic parts and a few pieces of wood, we have been able to build very light yet sturdy robots (as opposed to the huge all-metal ones most engineering schools teams bring to the Cup).


Webcam holder used on our 2011 robot

We used stepper motors for propulsion. While this might sound like a weird choice, using steppers meant we did not have to spend a shitton of money (which we did not have anyway) on odometry, and could buy some servomotors and electronics instead. Indeed, as a stepper is controlled by sending steps which increment or decrement the motor axis rotation angle, we could move with very decent accuracy just by trusting that our system worked, which implied calibrating things well so that the tires did not skid, and hoping that we were not getting bumped by other robots. Sadly, while the second part mostly held in 2011 and 2012, it seems that in 2013 robots were getting more and more aggressive, and we might need to add some form of odometry in future years.
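The open-loop motion boils down to converting distances into step counts via a calibrated steps-per-millimetre constant. A sketch with illustrative numbers (not our robot's actual geometry):

```python
# Hypothetical dead-reckoning helper: with steppers, position is just
# the running step count, so a well-calibrated steps-per-millimetre
# constant replaces odometry entirely (the values below are
# illustrative, not our robot's real parameters).
import math

STEPS_PER_REV = 200        # 1.8 degrees per full step
MICROSTEPPING = 16
WHEEL_DIAMETER_MM = 60.0

STEPS_PER_MM = STEPS_PER_REV * MICROSTEPPING / (math.pi * WHEEL_DIAMETER_MM)

def steps_for(distance_mm):
    """Number of (micro)steps to issue for a straight move."""
    return round(distance_mm * STEPS_PER_MM)
```

As long as the wheels never skid, issuing `steps_for(d)` steps moves the robot `d` millimetres; skidding or collisions silently break the assumption, which is exactly why odometry becomes attractive against aggressive opponents.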

We also tried to stay KISS (Keep It Simple, Stupid) for the actions and strategies. For instance in 2011, after spending a long while trying to write a super awesome AI that would use computer vision to understand the game board and act upon the other robot’s actions, we moved on to first following a scripted plan with hardcoded positions and actions, and then running a simple AI which would try to detect game items and act on them correctly. Using this very simple technique earned us 24th place, which was an awesome achievement for a team with no prior experience and no engineering support (and I’m not just boasting there, that’s what other teams told us :)).

In 2012, the global theme was Pirates, and the main goal was to grab CDs (which pictured coins) and wooden blocks (picturing gold ingots) and bring them into a predefined area. A bunch of ingots and CDs were placed on towers called “totems”:

"Totems" full of monies !!1

“Totems” full of monies !!1

We decided that our main goal would be to simply get everything from one of these totems, not by using some fancy robotic arm, but simply by making a hollow robot which could go over the totem, with appropriately cut doors to capture the game items. I guess we were the only team to use this fun strategy, which paid off quite well (we ranked 56th even though 3 of the 5 games went very badly because of hardware issues). Here’s a video of the robot beautifully performing its script:
[vimeo width=”600″ height=”400″][/vimeo]

To conclude, the main take-home message of this post is: KISS. Don’t aim way too high before you have a simple prototype that works. Many teams coming to the French Robotics Cup bring heavy and super complicated robots with dozens of sensors, but this complexity kills everything: they can’t correctly move or perform their actions, because the robot is too heavy, because writing software is super painful when you have to deal with dozens of inputs, and because debugging (both software and hardware) is close to impossible. Don’t try to be too smart, either. Writing an efficient AI is totally non-trivial, especially when you have to rely on noisy signals (such as computer vision and whatever technique you might use to locate the other robots). A simple hardcoded plan might work just as well and be 10 times easier and faster to develop. So yeah, KISS.

May 17th, 2013 @ 14:37

Whoa. It’s been such a long time since my last post. More than 2 years have passed since then; I’m now a PhD student in computer vision at Inria, and I have hacked on quite a bunch of projects.

Weirdly enough I did not do much pure software stuff (probably because I already code quite a lot for my research), but spent a lot (I mean, most) of my spare time on hardware or hardware-oriented things, mostly in the scope of hackEns, the ENS hacking club we founded in October 2010.

Among other projects, we built a RepRap 3D printer, a Mendel one, in early 2011, with a first successful print on April, 22nd, 2011.


hackEns’s Mendel 3D printer

A friend and I quickly started the replication process, building two Prusa i1 printers in August-September 2011 using plastic parts printed just-in-time with the Mendel. We continued the replication process with Jacques-Henri Jourdan, who slowly built a Prusa i2 starting early 2012, with a first successful print in late 2012, using plastic parts printed with my Prusa i1. That’s a beautiful chain of 3 generations of 3D printers. Over the builds, we have been amazed by the progressive technological improvements and the machine’s true capacity to improve itself. During summer 2012 I reprinted almost all of my printer’s plastic parts (the original ones were a little dirty and had clear defects that affected print quality), progressively transforming my Prusa i1 into a Prusa i2. One of the most striking things is that a 3D printer can produce better parts than the ones it is made of (and I’m not just talking about upgrades, but also about print quality).


My Prusa 3D printer

What’s also amazing is how fast the software side of 3D printing is evolving. Slicing, the operation of transforming a 3D object description into a printer-friendly list of commands, is definitely a non-trivial task, both on the algorithmic side and on the engineering side. It is done by slicing the object into successive layers along the Z axis and then computing where plastic should be deposited on each 2D layer, which requires a lot of geometrical computation and careful coding to correctly handle edge cases. Skeinforge was the first software I used to slice my objects; it was later improved upon in a fork known as SFACT. However, these Python-based approaches were quite slow overall (partly because of the language itself, partly because of design decisions such as a complete modularity which might not be the best fit for such software), and a few months later a new candidate appeared: Slic3r, developed from scratch in Perl with speed and efficiency in mind, and now regularly evolving with better core algorithms and GUI features.

The possibility of printing objects and the availability of 3D sensors such as the Kinect (and software such as Kinect Fusion) have pushed replication possibilities way further than what we could have dreamed of 10 years ago. And for hackers, it is now possible to build complex 3D objects from scratch, without having to painfully machine wood or metal by hand or with super-expensive CNC tools. While Thingiverse (or should we call it Makerbot Thingiverse now?) is full of cute or pretty designs, it has also become a home for DIY hobbyists and hackers. Coming from a pure CS background with little engineering knowledge, we were able to develop a hexacopter (a multirotor unmanned aircraft with 6 engines) frame from scratch, print it with our 3D printers and fly it (but I’ll tell you more about this project soon).


hackENS’s hexacopter being assembled

The only sad thing overall is that some people are trying to leverage the interest in 3D printing technologies to make a lot of money by selling crappy printers. The number of 3D printer crowdfunding attempts which were started on Kickstarter is just unbelievable, often promising things that cannot be achieved with the technology they are using (like printing super huge objects which are not hollow without a proper heating chamber) or just trying to sell existing designs at crazy prices. It’s also sad how Makerbot is getting all the good press while at the same time they are going further and further away from the open source approach, but that’s another story.

February 28th, 2011 @ 23:28

Some of the non-security things that impressed me most at past Chaos Communication Congresses were the Project BlinkenLights-like talks, where buildings were transformed into large-scale pixel screens, enabling random users to control a light matrix spread over the whole surface of the building. From the original monochromatic setups in Berlin, Paris and Toronto to the RGB AllColorsAreBeautiful project in Munich, things have evolved quite a lot, and documentation about how the whole things are done is emerging.

This is why we, at hackEns, started working towards setting up such an installation on one of the fronts of our school dormitories in Montrouge, France (right outside Paris). The first step was obviously to design a working module to light each window, producing enough light intensity to do something impressive while remaining cheap enough to deploy over the whole building (which is basically 6 floors with 8 windows on each floor, so 48 modules altogether). Let’s note that due to the specific structure of the building, our best call is to put the lights outside on the floor of the small balconies we have, as opposed to the previously mentioned projects where the lights were on the inside of the buildings.

Here is how the current module performs:

[vimeo width=”601″ height=”338″][/vimeo]

The module is remotely controlled from my laptop, which is connected to another computer through ssh, which is itself connected through a USB-to-serial adapter to an RS485 transceiver which dispatches the commands to the light modules on the bus.
Our current control scheme is to use a tiny PIC10F per window to control each color component through PWM (given the output power of the LEDs, we actually use transistors in the middle to do the power amplification). The PIC receives its orders on the RS485 bus. Note that since RS485 is limited to 32 receivers, we will actually use 2 RS485 buses with 24 receivers each. The protocol we use on RS485 is quite simple, basically encoding in a few bytes the module identifier and the three R/G/B components, with a few extra details for correct synchronization.
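As an illustration only (the actual byte layout and synchronization details of our protocol differ), a command frame on such a bus could be built like this:

```python
# Hypothetical framing for an RS485 light bus: a sync byte, the module
# identifier, the three colour components, and a simple additive
# checksum. This only illustrates the idea, not our real protocol.

SYNC = 0xAA

def make_frame(module_id, r, g, b):
    """Build one command frame for a light module."""
    payload = bytes([module_id, r, g, b])
    checksum = sum(payload) & 0xFF
    return bytes([SYNC]) + payload + bytes([checksum])
```

Each PIC listens for the sync byte, checks whether the identifier matches its own, verifies the checksum, and updates its three PWM duty cycles accordingly.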

We tested a bunch of various high power LEDs, from simple 5mm high power LEDs to 3W StarLEDs through some very nice SuperFlux LEDs. Each kind of LED had its pros and cons, so let’s list a few. The simple ones were really too directional, with a viewing angle of about 15°, so that it is almost impossible to blend the red/blue/green components into another color. The SuperFlux ones were pretty cool and produced quite nicely blended colors, but were really not powerful enough (especially on the red side), so we would need between 5 and 10 of them to do something cool, leading to a lot of extra issues and requiring a lot more PCB space. The StarLEDs are definitely the best on most aspects: a single StarLED produces awesomely bright colors, and a simple sheet of baking paper is enough to ensure perfect color blending (which is already pretty good, except on the borders of the lighting area, due to the 3 single-color LEDs inside the StarLED being 0.5mm or so away from each other). The only issue with the StarLEDs is that they draw a lot of current, being powered at about 3V/1A. This leads to power supply and safety issues, since we do not want wires carrying 50 amps running all over the front of the building. We intend to solve this by splitting the power distribution into a tree (with one or two branches per floor), powering each floor at a higher voltage and stepping it down to 12V or 5V with a switched-mode power supply on each floor.
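A quick back-of-the-envelope check of the numbers involved (the 48V trunk voltage is an assumption for illustration; efficiency losses are ignored):

```python
# Back-of-the-envelope power budget for the installation, assuming one
# 3 V / 1 A StarLED per window. The 48 V trunk voltage is a made-up
# example, not our actual design choice.

MODULES = 48
LED_VOLTS, LED_AMPS = 3.0, 1.0

total_watts = MODULES * LED_VOLTS * LED_AMPS       # whole-building load
amps_at_5v = total_watts / 5.0                     # one flat 5 V rail
amps_per_floor_at_48v = (8 * LED_VOLTS * LED_AMPS) / 48.0  # per-floor trunk
```

A single low-voltage rail would have to carry nearly 30A, while a high-voltage trunk stepped down per floor keeps each run well under an amp, which is exactly why the tree topology wins.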

The next milestones are to evaluate some cheap StarLEDs from Hong Kong, then to produce a prototype PCB and then to scale up to the building.

There are still a few issues right now, namely the fact that we need to find a cheap way to enclose our hardware to make the whole thing weatherproof, that we need to get the approval of all the people living behind the windows we want to light, and the approval of our lovely administration, and that we need to find the money to fund it all. While the first three issues are pretty much a matter of time, any help on the financial side is obviously welcome. More specifically, if you know a company which could provide either financial or hardware support (especially on the StarLED side: they are about 12 euros per LED, and make up more than 90% of the project budget), your help is more than welcome.

February 16th, 2011 @ 12:18

Yesterday we at hackEns received our BeagleBoard-xM, ordered from DigiKey during FOSDEM. Let’s first note for French people interested in getting such boards from DigiKey that we had to pay 39.15€ of VAT and similar fees to the UPS guy upon delivery. Now let’s move onto some context and tips about getting it to run.

The BeagleBoard-xM is an 8.25×8.25cm board, powered by an ARM Cortex-A8 CPU with 512MB of RAM. It features, among other things, a microSD slot, a serial port, an ethernet port, an HDMI output, an S-Video port, 4 USB-A ports, 1 mini-USB OTG port and stereo input/output. That’s basically all you need to build a simple desktop computer. Just plug in a USB keyboard, a USB mouse, an RJ-45 cable and your HDMI cable and boot it. Well, almost.

Before being able to run anything useful, you first have to power the board and then build a microSD card with a useful distribution on it.

First tip: connect your BeagleBoard-xM to your computer through the serial port. Buy a USB-to-serial adapter if needed, buy the serial cable if needed, but do not try to get things working blindly. You definitely won’t be able to debug any bootloader problem without a working serial link, and you’ll most likely waste much more time than you’d need to go to your local hardware shop and buy the required bits.

Now, let’s insert the provided microSD card into the slot, and move on.

TL;DR here : don’t try to power your BeagleBoard using USB OTG, use a regulated 5V DC power supply instead
Powering the board might seem easy, given that there is an USB OTG slot through which you can definitely feed power to the board to get it running (LEDs blinking), but there are two caveats. The first issue is that a single USB host port cannot supply enough power to the board to get it running smoothly. The most obvious symptom of this problem is that the board simply won’t boot (the bootloader will load the ramdisk and uncompress the kernel and hang there, or it might even start the bootsequence but will fail at some point with no obvious reason, or it might even reach the second issue, but even without the second issue it won’t be able to finish the bootsequence). As advised on some forums, you can use a Y-shaped USB wire like those you can get for 2.5″ USB external hard drives. This will enable you to give enough power to feed the BeagleBoard.
Yet you will still encounter the second problem: the kernel panics at some point during the boot sequence, somewhere inside a musb_interrupt call. This seems to be caused by a bug in the OTG module of the kernel used on the test system, which makes it impossible to boot with anything plugged into the USB OTG port, so you’ll have to power your BeagleBoard from an external 5V power supply. We used a 2 A regulated DC power supply for that purpose; since it could only deliver 4.5V and 6V outputs, we had to add a 5V/1A voltage regulator.

We were then able to boot the test system, got a nice prompt, logged in, confirmed that we had a basic working system, ran the mandatory testled tool, and halted to move on to the second part: setting up a complete system.

This is where the fun really begins, because most howtos are about the original BeagleBoard (non-xM), especially on the eLinux wiki, and that can be quite confusing. We’ll use the Ångström distribution, a simple GNU/Linux distro targeted at embedded hardware such as the BeagleBoard. This choice was based on the fact that the Debian ARM packages are compiled for a more generic class of ARM CPU, that the current Ubuntu performs poorly on the BeagleBoard-xM (it does not fully support the board’s CPU and downclocks it badly), and that the other contenders (Android, Maemo/Mer, MeeGo) did not look that great.

Basically, all you have to do to setup the microSD card is to follow the excellent instructions from Trey Weaver, with a few extra tips.

First, to be on the safe side, you can get all the files you need from the Ångström BeagleBoard demo fileserver: mkcard.txt, MLO, u-boot.bin, uImage, modules.tgz and the Angstrom-Beagleboard….tar.bz2 rootfs. You do not need to go through Narcissus.

Next, make SURE that the first file you copy to the boot filesystem is MLO, otherwise the board won’t be able to boot (the bootloader has to occupy the very first bytes of the filesystem).
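A sketch of the copy order, assuming the FAT boot partition created by the mkcard.txt script is mounted at /media/boot and the ext3 root partition at /media/rootfs (both mount points are assumptions; adjust to where your system mounts the card):

```shell
# MLO must be the very first file written to the freshly formatted FAT partition
cp MLO /media/boot/
cp u-boot.bin uImage /media/boot/

# Unpack the Ångström rootfs and the kernel modules onto the second partition
sudo tar xjf Angstrom-Beagleboard-*.tar.bz2 -C /media/rootfs
sudo tar xzf modules.tgz -C /media/rootfs

# Make sure everything hits the card before removing it
sync
```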

Next, when looking at other howtos, ignore all the bits about pressing the USER button or about halting the autoboot to set the bootloader options: you need neither. Indeed, the BeagleBoard-xM has no NAND flash to store the bootloader options in, so running the bootloader’s saveenv command will just make it hang. Instead, if you want to specify your own bootloader options, you have to produce a boot.scr file. To do so, install the mkimage tool (packaged in Debian and Ubuntu as uboot-mkimage; it comes from the u-boot bootloader we use here). Then create a file (let’s call it boot_cmd for instance) containing the boot commands you want; a basic command file is the following:
setenv bootargs 'console=ttyS2,115200n8 root=/dev/mmcblk0p2 rootfstype=ext3 rootwait'
mmc init
fatload mmc 0 0x80300000 uImage
bootm 0x80300000
Build the boot.scr file from it :
mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "Beagleboard-xM boot script" -d boot_cmd boot.scr
Now all you have to do is to copy boot.scr to the boot partition (i.e. alongside the MLO, u-boot.bin & uImage files) and reset your BeagleBoard.
This can be useful to specify the video mode to use, for instance I use the following bootargs to get correct rendering on my LCD :
setenv bootargs 'console=ttyS2,115200n8 omapfb.mode=dvi:1360x768MR-16@60 root=/dev/mmcblk0p2 rootfstype=ext3 rootwait'

You should now be able to put your microSD card into your BeagleBoard and get it running fully (I had a few initial hiccups with the microSD not being recognized at this point; I just moved it back and forth to my laptop and it has worked beautifully since then). You should get a working GNOME environment (after a nice gdm prompt).

Let’s note that Ångström uses the opkg package manager, which works in a pretty straightforward way, with install/remove/update/upgrade commands. Note that you have to use the list command to search for packages, i.e. you need to do opkg list opencv to look for opencv-related packages, and not opkg search opencv as you would with apt.
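The everyday opkg workflow described above boils down to a handful of commands (opencv is just used here as an example query):

```shell
opkg update            # refresh the package lists from the feeds
opkg list opencv       # search: list available packages matching "opencv"
opkg install mplayer   # install a package and its dependencies
opkg remove mplayer    # remove a package
opkg upgrade           # upgrade all installed packages
```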

I was very pleased to see that opencv was installed by default (or at least I did not have to install it after performing an initial opkg upgrade), that my Logitech webcam was recognized out of the box, and that I could use mplayer tv:// without any further tinkering.
The current image does not come with a build chain, so you have to install the required bits to build your own apps. I believe all you need is to install the task-native-sdk package to get everything (gcc, g++, make, binutils…) set up. Once that was done, I only had to install git, vim, pkgconfig and cmake by hand to feel at home.

I am now investigating some issues related to building stuff using OpenCV, similar to those described here on Stack Overflow. I’ll keep you updated.

February 16th, 2011 @ 10:16

Once again, after months of silence, it’s about time to break it. Since the last post I have left the USA after about 6 months there: thousands of miles travelled, dozens of burgers and subs eaten and hundreds of hours worked (being quite nostalgic these days, I guess I’ll post a lot more on this soonish, probably through a large overview of all the places I visited while in the US). I was quite happy to come back to France close to my family, as well as to get back to school.

The school year started extremely well, with a lot of motivation from a few friends and me about getting a robot built for the next Eurobot French national contest. Soon after, another guy announced his intention to create a hackerspace inside the school, quickly gathering a lot of interest from other students. We soon founded hackEns, a hacking group with various goals, from getting a Fab Lab running with the required tools and a RepRap 3D printer to building musical instruments from leftover wood pieces, through building our robot, hacking a PS/2 keylogger or fixing the broken light & sound hardware lying around. We even have a place (well, it’s in the school’s basement) where we can freely work.

Months later, the group is definitely not dead. We hosted the build of a Huxley RepRap by two artists from a nearby school, our own Mendel RepRap is almost done, our robot is nearing the prototyping stage, and we have a few new projects in the pipeline, from building an RC car with a thermal engine to setting up a Blinkenlights-like display on one of our school’s dormitories. I’ll try to cover our latest activities and findings over time. Keep posted, it’s gonna be legen… wait for it… dary!