Falcon Controllers and Null Pixels

I have, upon prodding from the beloved, gotten into the programmable LED pixel Christmas light game. We’ve put together a display driven by a Falcon F4V3 controller board.

I recently discovered that I needed to insert a couple of null pixels into the display due to too-long data lines between elements (mostly because I wasn’t smart enough to buy any Falcon F-Amps before getting to this point).

The documentation for the controller was not particularly clear on the way to handle such null pixels. I saw that the ‘String Ports’ page on the controller management console had a ‘Null’ column in the configuration table. The documentation says this about the column:

Used to define the number of nodes which will not light up at the start of the string. These nodes will pass data to the next node but will not light up. This is useful when there is gap between display elements larger than your pixel spacing and you don’t want to cut/splice the string between the nodes to accommodate for this.

That description implied that these pixels would be invisible to the controller – that they would be treated as if they weren’t present. However, my experimentation with the setting implied otherwise: every time I increased the ‘Null’ setting, a pixel would fall off the end of the model.

My assumption was that a Null pixel was still consuming data from the strand (or, more precisely, that the controller was not inserting an extra pixel’s worth of data in the stream) but it was being set to ‘always off’. Further experimentation showed that was wrong: pixels added to the ‘Null’ column are invisible to the model. However, the number of pixels in the strand needs to account for Null pixels.

That is, if you set the ‘Null’ count for a strand to 5, you need to increase the ‘Pixel Count’ for the strand by 5 as well. Doing so adjusts the ‘End Channel’ for the strand up by a commensurate number of channels, but that doesn’t seem to reflect reality: I’ve added Null pixels that would (according to the controller interface) cause a channel conflict with another strand, but the model seems to work correctly.
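
As a worked example (assuming standard 3-channel RGB pixels): for a 100-pixel model with a ‘Null’ count of 5, set the ‘Pixel Count’ to 105. The controller will then report the strand as occupying 315 channels rather than 300, even though the model itself still only uses 300.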

Spring Scoped Proxies and ‘prototype’ beans

I thought I understood how Spring deals with prototype-scope beans: when injected into another bean as a dependency, or explicitly retrieved from a BeanFactory via getBean, a new instance is always created. Thus, when injected as a dependency the prototype bean effectively shares the same lifecycle as the bean it’s injected into (except that destroy listeners aren’t called).

That’s correct if scoped proxies aren’t being used, but is not correct if they are. If scoped proxies are enabled, a scoped proxy is instead injected, and a new instance of the prototype bean is created every time the proxy is accessed! I don’t see that behavior discussed in the Spring docs. It seems like it fulfils the goals of method injection in an easier fashion.

While that behavior might be useful at times, it seems a rather subtle way to enable it. If you explicitly declare the proxyMode when declaring a bean’s scope, it’s obvious enough. But if you declare the type of scoped proxy you want system-wide, using scopedProxy on a @ComponentScan annotation (or scoped-proxy on a <context:component-scan> tag), this drastically changes the behavior of every prototype-scope bean the scan picks up.

Perhaps the best practice is to always declare a proxyMode attribute on @Scope("prototype") annotations, making the decision explicit.
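
As a minimal sketch of both behaviors (Widget and WidgetClient are illustrative names of my own, and I’m assuming component scanning is enabled):

import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Component;

// A prototype bean with its proxy mode declared explicitly. With
// ScopedProxyMode.NO (the default), the client below gets one instance
// at injection time and keeps it for its own lifetime. With
// ScopedProxyMode.TARGET_CLASS, the client instead gets a CGLib scoped
// proxy, and every call through that proxy is served by a fresh instance.
@Component
@Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class Widget {
    public String instanceId() {
        return Integer.toHexString(System.identityHashCode(this));
    }
}

@Component
public class WidgetClient {
    private final Widget widget; // with TARGET_CLASS, this is the proxy

    public WidgetClient(Widget widget) {
        this.widget = widget;
    }

    public void demonstrate() {
        // With the scoped proxy in play, these print two different ids,
        // because each call creates a new Widget behind the proxy.
        System.out.println(widget.instanceId());
        System.out.println(widget.instanceId());
    }
}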

Spring @Async annotations and interface-based proxying

The Spring Framework for Java supports an @Async annotation which specifies that the annotated method should be run asynchronously in a background thread. (See here for details.)

The docs did not [obviously] point out some important notes around usage of this annotation with interface-based proxying. After spending some time stepping through Spring code, I discovered the following:

When Using Interface-Based Proxying, Classes with @Async Methods Require Interfaces

Support for the @Async annotation is implemented using a Spring dynamic proxy. When Spring is configured to use interface-based proxies (as opposed to CGLib proxies), in order for a bean to get a dynamic proxy:

  1. it must implement an interface, and
  2. you must inject the bean into dependent classes referencing the interface rather than the concrete class.

The same rules apply when using scoped beans. However, if you don’t fulfill the requirements in that case, you get an error at startup. With @Async-tagged methods, there’s no warning in the log; the method simply won’t be run asynchronously.
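
Here’s a sketch of the shape that works (ReportService and friends are illustrative names of mine, and I’m assuming async support is otherwise enabled, e.g., via @EnableAsync or <task:annotation-driven/>):

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

// The interface that the JDK dynamic proxy will implement.
public interface ReportService {
    void generateReport();
}

@Service
public class ReportServiceImpl implements ReportService {
    @Async
    @Override
    public void generateReport() {
        // Runs on a background thread, but only when invoked through the
        // proxy, i.e., when injected by the ReportService interface.
    }
}

@Service
public class ReportRunner {
    // Inject by the interface, not by ReportServiceImpl; per the above,
    // breaking these rules produces no warning, and the method quietly
    // runs synchronously instead.
    private final ReportService reportService;

    public ReportRunner(ReportService reportService) {
        this.reportService = reportService;
    }

    public void kickOff() {
        reportService.generateReport(); // returns immediately
    }
}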

@Async Annotations Specifying an Executor Must Be On the Interface

If you specify an explicit executor to use in the value of the @Async annotation (e.g., @Async("myCustomExecutor")), the annotation must be on the interface method definition, not on the concrete method implementation.

If you put the annotation on the concrete method implementation, it will be run asynchronously. However, it will always use the default executor. This is because the code to determine the executor to use (in org.springframework.aop.interceptor.AsyncExecutionAspectSupport.determineAsyncExecutor(Method)) is looking for annotations on the method and class being referenced by the caller, which is always going to be the interface method due to the rules in the previous section.
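
Sketched out, with the same caveat that the names are mine (and the named executor bean has to exist):

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;

public interface ReportService {
    // determineAsyncExecutor() finds the qualifier here, on the
    // interface method that callers actually reference...
    @Async("myCustomExecutor")
    void generateReport();
}

// ...whereas the same annotation placed only on the implementing method
// in ReportServiceImpl would still run asynchronously, but always on the
// default executor.

@Configuration
@EnableAsync
public class AsyncConfig {
    // The named executor that the @Async qualifier refers to.
    @Bean(name = "myCustomExecutor")
    public Executor myCustomExecutor() {
        return Executors.newFixedThreadPool(4);
    }
}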

Upgrading your Time Capsule hard drive, and migrating data

The 500GB disk in my first-gen Time Capsule became hopelessly full, with two Macs backing up to it and a slew of other media files stored on there to boot. Poking around on teh Google turned up some good tutorials on the mechanics of actually replacing the drive, particularly this one. Doing so is a pretty straightforward process for anybody who has ever cracked open a computer.

However, this Time Capsule has loads of data on it that I wanted to preserve, preferably in a quick-and-easy fashion, and I didn’t find much information on how to go about performing such a migration. It turns out to be not too difficult, but I made a few time-consuming missteps on the way. Hence my gift to you: everything you need to know to do it yourself.

First, a note on the drive

Without doing much research, simply thinking that quadrupling my capacity was a good goal, I picked up one of these drives, a WDC 2TB drive with EARS technology. I then found this comment, among others, that indicated Time Capsule might not support these drives, at least not without some jumpering.

I’m happy to report that my first-gen Time Capsule (running firmware 7.5.2) has had no issue dealing with that drive straight out of the box.

Partitioning

The Time Capsule requires a specific partitioning scheme to work. On my first attempt, I put the new drive in an enclosure, plugged it into my Mac, and gave it a single HFS+ partition using Disk Utility. When I put it into the Time Capsule, the disk was recognized, but AirPort Utility reported a ‘problem’ with it. At that point, using the ‘Erase’ button in AirPort Utility caused the Time Capsule to partition the disk in the required fashion. That is the easiest and surest way to create the right disk structure, but it does require swapping the Time Capsule disks a couple of times before the process is complete.

It might be possible to create the required partitioning scheme manually prior to putting the drive into the Time Capsule, to avoid a swap. I’d be interested to hear from anybody who tries, to know if it’s practical. Here is what Time Capsule put on the disk (as discovered through Disk Utility after I took the drive out of the Time Capsule again and put it back in the USB enclosure):

  • At the start of the drive, a partition named APconfig, type ‘Mac OS Extended (Journaled)’, size 1.07 GB (1,073,741,824 Bytes, to be exact). This partition had a single non-hidden file in it, apparently a backup of some of the AirPort configuration data.
  • Immediately following, a partition named APswap, type ‘Mac OS Extended (Journaled)’, also of size 1.07 GB (1,073,741,824 Bytes, to be exact). This partition had no non-hidden files in it.
  • Finally, a partition filling the remaining space on the drive, named the same as the Time Capsule was named (via the AirPort utility). This partition was also of type ‘Mac OS Extended (Journaled)’.

The third partition is the only one visible to the user after the drive is in the Time Capsule, and contains backups/other user data.

Archiving data

My main goal was to migrate the data from the old Time Capsule disk to the new disk. Happily, AirPort Utility has a handy ‘Archive’ button that will help you do just that. It will only copy data to a USB-attached hard drive, so you’ll require a USB drive enclosure to perform the migration. To start the migration:

  1. Put the new, partitioned drive into a USB enclosure.
  2. Plug the USB enclosure into the Time Capsule.
  3. In AirPort Utility, go to the ‘Disks’ pane.
  4. Click the ‘Archive’ button.
  5. Select the third, large partition on the new disk as the target.
  6. Start the archiving process.

Copying my 460GB of data took a number of hours, so it’s best to be able to leave it overnight.

Once the archival is complete, you’ll need to do a bit of fiddling with the file structure to make the data usable by the Time Capsule. The archival process will put the entire contents of the main disk partition into a folder called “<Time Capsule name> Archive” (or something to that effect). Underneath that, I had a folder named ‘Shared’, which contained the shared-folder Time Capsule data. Since I have configured my Time Capsule to use user accounts, there was also a ‘Users’ folder, with another folder for each account inside of that.

These ‘Shared’ and ‘Users’ folders need to be in the root directory of the disk for them to be usable by the Time Capsule. So, you’ll need to:

  1. Plug the USB enclosure into your Mac.
  2. Open the drive corresponding to the third, large partition on the disk.
  3. This partition should contain a single folder, ‘<Time Capsule name> Archive’. Move the contents of this folder into the root folder of the drive, then delete the (now-empty) ‘Archive’ folder.

Once this is done, the drive will be usable in the Time Capsule, and all of your old data will be accessible. Swap disks in the Time Capsule as discussed in the tutorial listed above, and off you go!

Movin’ on Up

Until now, this blog used Blogger. The reason for this was, primarily, lethargy. While not notable for much else, this blog is at least venerable, dating back to November of 2001, back before blogs were cool, man. Blogging platforms were rather more limited back then (or at least I was oblivious to better options), so I chose Blogger. I never bothered to change services since, well, I rarely bothered to write in it.

I’m also one of approximately three people who were using Blogger’s ‘publish via FTP’ option. I recently got an e-mail from Google saying that they were discontinuing the FTP publishing option because it was annoying them and because approximately three people were using it. With this turn of events I decided it was time to renovate a bit, so I installed WordPress and imported everything into that.

I’d never used WordPress before; it’s slicker than I expected, and I’m pretty impressed. I look forward to not writing in it.

ri documentation for MacPorts Ruby 1.9.1

My giving-back to the Internets for the day:

I installed Ruby 1.9.1 onto my Mac using MacPorts, and found that ‘ri’ didn’t work to start off with—the system classes weren’t on the documentation path (as noted in this issue). I tracked down the problem to this file:

/opt/local/lib/ruby1.9/1.9.1/rdoc/ri/paths.rb

To get it to work, change this line (line 31 in my install):

if m = /ruby/.match(RbConfig::CONFIG['RUBY_INSTALL_NAME'])

to look like this:

if m = /ruby1.9/.match(RbConfig::CONFIG['RUBY_INSTALL_NAME'])

Once this is done, ri1.9 should find the standard documentation (e.g., ri1.9 Array should now show the docs for the Array class).

The problem is caused by the non-standard install path used by MacPorts for this not-yet-mainstream version.

D-Link: Building Networks for Masochists

In our move to New Zealand, we’re trying to cut costs by buying used items where we can. We lucked into finding some graduating college guys who were clearing house and selling everything, so we bought a bunch of stuff for a good deal.

Amongst this stuff was a DSL modem/wireless router. We definitely needed such a beast, and new ones here are ungodly expensive, so that was cool. However, it was made by D-Link. I’d had limited experience with D-Link products, but I hadn’t been particularly impressed by them. It was only a vague ambivalence, and it still seemed a good deal, so we went with it.

We get it home, and first thing I find is it has a British-style power plug on it, which is not at all the same as a New Zealand plug. The guys who sold it to us were, I believe, Malaysian, and it turns out they must have brought it from Singapore (which apparently uses British-style outlets). Not really the router’s fault, but not a good omen.

So I set it up and it seems to mostly work. The administration website is horrifically designed, but how often do you have to deal with it? All the time, as it turns out, as it doesn’t seem to hold persistent settings very, well, persistently. But it still mostly works, so OK.

But Sheila starts having problems going to certain web pages. Can’t check her e-mail. Can’t log into her bank account. Most any secure page just doesn’t work. Just sits for a few seconds…then nothing. My Mac, however, had no issues (which of course led to more gloating on my part).

Thought at first it could be some malware, but she’s pretty careful about that, and it didn’t feel like it. Seemed like timeouts, like the bloody router was dropping packets. (Unfortunately as a combo modem/router, we couldn’t take it out of the picture to verify.)

Futzed around a bit, did some Googling…saw others with similar problems but no solid solution. So finally I sit down to have a better look at it. Install Wireshark (what they’re calling Ethereal these days, if you were unaware or forgot like I do every time I learn the new name). Capture packets during a failure…and it’s immediately apparent: nice, black-highlighted lines, ICMP messages from the router, saying ‘packet dropped; too large for next hop, fragmentation required’. Yeah, MTU stuff, which I kinda guessed.

So I learn a bit about PMTUD, or ‘Path MTU Discovery’ protocol. It’s a way of dynamically optimising MTU to a particular destination by first sending larger packets, looking for responses saying, ‘nope, too big’, and sending again, making them smaller till they fit down the Intertubes.
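
In other words (assuming a 1492-byte next hop, as in my case): the PC sends a 1500-byte packet with the ‘don’t fragment’ flag set; the router drops it and replies with an ICMP ‘fragmentation needed’ message naming the 1492-byte limit; and the PC is expected to resend the data in packets of 1492 bytes or less.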

In my case, MTU from the router out to the Internets was set to the provider-specified 1492, which sounds appropriate for a PPPoE DSL connection (PPPoE’s headers eat 8 bytes of Ethernet’s standard 1500). The ‘too-large’ packets were 1500 bytes, which, checking my calculator, is larger than 1492. So, yeah, problem.

But the router told the computer that it’s all whack. Why don’t it listen?

Looking at the response ICMP packet a bit further, the info about the failed packet ain’t right! TCP sequence numbers don’t match and are huge. Checksum failed. On all of them. So it’s like going to the drive-thru and all you hear through the loudspeaker is garbled static. You try a few more times, “I WANT A CHEESE BURGER PLEASE”, but eventually you give up and drive off.

Okay, terrible analogy. But I’m guessing that the computer’s IP stack couldn’t correlate the response with the original packet, so it thinks it was just lost, and tries again a few times then gives up. (Why no problem on my Mac? Dunno…maybe it has a smaller max MTU. Maybe it doesn’t set ‘do not fragment’. Maybe it makes a guess at correlating, or decreases the packet size on retries if it sees no response. I’ll have to check it out.)

So, first thought: firmware upgrade on the router. I look and see it’s running what appears to be version 1.00 beta, which sounds old to me. So I go to dlink.com and find the download page and see a pretty-recent 2.00 version and think, ‘cool!’. Then I see the big warning saying how ‘this firmware is engineered for North American products only and using it on another product may render it inoperable’, and think ‘crap!’. So I check out the Singapore support page, and find a couple of inconsistent links to various downloads, with specific versions for Thailand and some for Singapore…is that really necessary?! Most of them are pretty old, and most seem to say ‘only for ADSL2 connections, breaks ADSL1’. Some appear to be for very-particular bug fixes, but they’re just in a plain directory listing, no info to go on. There’s a reference to the firmware shipping with it, which is still the same bloody beta version that I already have!

Eventually I get scared and give up on that path. Figure out a way to adjust global MTU on the machine to 1492, and all is golden. Not a great solution, but a workable one.

So what’s my point here? Well, if somebody else has this problem and Google points them here, maybe they’ll find a solution hidden amongst my ramblings. And also to note that you can see the chaos and ad-hoc nature of software development at D-Link, from the outside: from the terrible design of the software, from the (presumed) bugs in it, and from the fact that they ship different software for every bloody country they sell to! I’ve been in enough bad-enough development environments to know the signs, but the ones I’ve worked in have had the good grace to collapse before going to market.

Or, in short: I think I’ll avoid buying another D-Link product in the future.

Arbitrary Quote

Truth is, I’ve been spring-loaded my entire life. Life gets shot at me from point-blank range and I just shoot straight back. Which is okay if you’re John Wayne or De Niro saving the world, but when your chief nemesis is bubble-overrun it’s slightly less convincing. When you’re younger, you think you’re the only one. Special. Then you start seeing spring-loaded people everywhere. Sitting on buses. Kicking their dogs. Beating their children unconscious. Road rage. Trolley rage. Pew rage. Soul rage.

Overdue new Releases, Matt Johnson

This from the book I’m currently reading, and rather enjoying, by a local Wellington author.