Falcon Controllers and Null Pixels

I have, upon prodding from the beloved, gotten into the programmable LED pixel Christmas light game. We’ve put together a display driven by a Falcon F4V3 controller board.

I recently discovered that I needed to insert a couple of null pixels into the display because of too-long data lines between elements (mostly because I wasn’t smart enough to buy any Falcon F-Amps before reaching this point).

The documentation for the controller was not particularly clear on how to handle such null pixels. I saw that the ‘String Ports’ page on the controller management console had a ‘Null’ column in the configuration table. The documentation says this about the column:

Used to define the number of nodes which will not light up at the start of the string. These nodes will pass data to the next node but will not light up. This is useful when there is gap between display elements larger than your pixel spacing and you don’t want to cut/splice the string between the nodes to accommodate for this.

That description implied that these pixels would be invisible to the controller – that they would be treated as if they weren’t present. However, my experimentation with the setting implied otherwise: every time I increased the ‘Null’ setting, a pixel would fall off the end of the model.

My assumption was that a Null pixel still consumed data from the strand (or, more precisely, that the controller was not inserting an extra pixel’s worth of data into the stream) but was being set to ‘always off’. Further experimentation showed that was wrong: pixels added to the ‘Null’ column are invisible to the model. However, the strand’s pixel count does need to account for them.

That is, if you set the ‘Null’ count for a strand to 5, you need to increase the ‘Pixel Count’ for the strand by 5 as well. Doing so adjusts the ‘End Channel’ for the strand up by a commensurate number of channels, but that doesn’t seem to reflect reality: I’ve added Null pixels that would (according to the controller interface) cause a channel conflict with another strand, but the model seems to work correctly.
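
For example (my numbers, not Falcon’s): a port driving a 100-pixel element through 5 spliced-in null pixels would be configured with ‘Null’ = 5 and ‘Pixel Count’ = 105, and the controller will bump the ‘End Channel’ up by 15 channels (5 pixels × 3 channels each, for RGB pixels). The element itself still occupies only 100 pixels in your sequencing software, and the nulls never appear in the model.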

Spring Scoped Proxies and ‘prototype’ beans

I thought I understood how Spring deals with prototype-scope beans: a new instance is created every time the bean is injected into another bean as a dependency or explicitly retrieved from a BeanFactory via getBean. Thus, when injected as a dependency, the prototype bean effectively shares the lifecycle of the bean it’s injected into (except that destruction callbacks aren’t invoked).

That’s correct if scoped proxies aren’t being used, but not if they are. If scoped proxies are enabled, a scoped proxy is injected instead, and a new instance of the prototype bean is created every time the proxy is accessed! I don’t see that behavior discussed in the Spring docs. It seems to fulfill the goals of method injection in an easier fashion.

While that behavior might be useful at times, it seems a rather subtle way to enable it. If you explicitly declare the proxyMode when declaring a bean’s scope, the effect is obvious enough; but if you declare the type of scoped proxy you want system-wide using scopedProxy on a @ComponentScan annotation (or scoped-proxy on a <context:component-scan> tag), you drastically change the behavior of every prototype-scope bean the scan picks up.

Perhaps the best practice is to declare a proxyMode attribute on every @Scope("prototype") annotation, making the decision explicit.
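
To make the difference concrete, a minimal sketch (the class names are mine):

    import org.springframework.beans.factory.config.ConfigurableBeanFactory;
    import org.springframework.context.annotation.Scope;
    import org.springframework.context.annotation.ScopedProxyMode;
    import org.springframework.stereotype.Component;

    // No proxy: each injection point gets its own instance, which then
    // lives as long as the bean it was injected into.
    @Component
    @Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
           proxyMode = ScopedProxyMode.NO)
    class PerInjectionPointBean { }

    // Scoped proxy: a proxy is injected instead, and every method call
    // through it creates a brand-new target instance.
    @Component
    @Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
           proxyMode = ScopedProxyMode.TARGET_CLASS)
    class PerMethodCallBean { }

With proxyMode set to TARGET_CLASS (or INTERFACES), every call through the injected proxy hits a fresh instance, which is exactly the per-access behavior described above.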

Spring @Async annotations and interface-based proxying

The Spring Framework for Java supports an @Async annotation, which specifies that the annotated method should be run asynchronously in a background thread. (See the Spring reference documentation for details.)

The docs did not [obviously] point out some important caveats around using this annotation with interface-based proxying. After spending some time stepping through Spring code, I discovered the following:

When Using Interface-Based Proxying, Classes with @Async Methods Require Interfaces

Support for the @Async annotation is implemented using a Spring dynamic proxy. When Spring is configured to use interface-based proxies (as opposed to CGLib proxies), in order for a bean to have a dynamic proxy:

  1. it must implement an interface, and
  2. you must inject the bean into dependent classes referencing the interface rather than the concrete class.

The same rules apply when using scoped beans. In that case, though, failing to meet the requirements gets you an error at startup. With @Async-tagged methods, I don’t see any warning in the log; Spring simply doesn’t run the method asynchronously.
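
Here’s a minimal sketch of a working arrangement (the names are mine, and it assumes @EnableAsync is configured elsewhere):

    import org.springframework.scheduling.annotation.Async;
    import org.springframework.stereotype.Service;

    interface ReportService {
        void generate();
    }

    @Service
    class ReportServiceImpl implements ReportService {
        @Async
        @Override
        public void generate() {
            // Runs on a background thread when called through the proxy.
        }
    }

    @Service
    class ReportClient {
        // Depend on the interface type. If ReportServiceImpl implemented no
        // interface (or were injected by its concrete type), the interface-based
        // proxy couldn't be used and generate() would silently run on the
        // caller's thread.
        private final ReportService reportService;

        ReportClient(ReportService reportService) {
            this.reportService = reportService;
        }

        void kickOff() {
            reportService.generate(); // returns immediately
        }
    }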

@Async Annotations Specifying an Executor Must Be On the Interface

If you specify an explicit executor to use in the value of the @Async annotation (e.g., @Async("myCustomExecutor")), the annotation must be on the interface method definition, not on the concrete method implementation.

If you put the annotation on the concrete method implementation, it will be run asynchronously. However, it will always use the default executor. This is because the code that determines which executor to use (org.springframework.aop.interceptor.AsyncExecutionAspectSupport.determineAsyncExecutor(Method)) looks for annotations on the method and class referenced by the caller, which will always be the interface method due to the rules in the previous section.
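
Revisiting the sketch above, that means the qualifier has to live on the interface:

    import org.springframework.scheduling.annotation.Async;
    import org.springframework.stereotype.Service;

    interface ReportService {
        @Async("myCustomExecutor") // honored: this is the method the proxy sees
        void generate();
    }

    @Service
    class ReportServiceImpl implements ReportService {
        // Putting @Async("myCustomExecutor") here instead would still run the
        // method asynchronously, but always on the default executor:
        // determineAsyncExecutor() never sees the implementation method.
        @Override
        public void generate() {
            // ...
        }
    }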

Upgrading your Time Capsule hard drive, and migrating data

The 500GB disk in my first-gen Time Capsule became hopelessly full, with two Macs backing up to it and a slew of other media files stored on there to boot. Poking around on teh Google turned up some good tutorials on the mechanics of actually replacing the drive, particularly this one. Doing so is a pretty straightforward process for anybody who has ever cracked open a computer.

However, this Time Capsule has loads of data on it that I wanted to preserve, preferably in a quick-and-easy fashion, and I didn’t find much information on how to go about performing such a migration. It turns out to be not too difficult, but I made a few time-consuming missteps on the way. Hence my gift to you: everything you need to know to do it yourself.

First, a note on the drive

Without doing much research, simply thinking that quadrupling my capacity was a good goal, I picked up one of these drives, a WDC 2TB drive with EARS technology. I then found this comment, among others, that indicated Time Capsule might not support these drives, at least not without some jumpering.

I’m happy to report that my first-gen Time Capsule (running firmware 7.5.2) has had no issue dealing with that drive straight out of the box.

Partitioning

The Time Capsule requires a specific partitioning scheme to work. On my first attempt, I put the new drive in an enclosure, plugged it into my Mac, and gave it a single HFS+ partition using Disk Utility. When I put it into the Time Capsule, the disk was recognized, but AirPort Utility reported a ‘problem’ with it. At that point, using the ‘Erase’ button in AirPort Utility caused it to partition the disk in the required fashion. That is the easiest and surest way to create the right disk structure, but it does require swapping the Time Capsule disks a couple of times before the process is complete.

It might be possible to create the required partitioning scheme manually prior to putting the drive into the Time Capsule, to avoid a swap; I’ve sketched an untested attempt below, and I’d be interested to hear from anybody who tries it, to know if it’s practical. Here is what the Time Capsule put on the disk (as discovered through Disk Utility after I took the drive out of the Time Capsule again and put it back in the USB enclosure):

  • At the start of the drive, a partition named APconfig, type ‘Mac OS Extended (Journaled)’, size 1.07 GB (1,073,741,824 Bytes, to be exact). This partition had a single non-hidden file in it, apparently a backup of some of the AirPort configuration data.
  • Immediately following, a partition named APswap, type ‘Mac OS Extended (Journaled)’, also of size 1.07 GB (1,073,741,824 Bytes, to be exact). This partition had no non-hidden files in it.
  • Finally, a partition filling the remaining space on the drive, named the same as the Time Capsule was named (via the AirPort utility). This partition was also of type ‘Mac OS Extended (Journaled)’.

The third partition is the only one visible to the user after the drive is in the Time Capsule, and contains backups/other user data.
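
For anyone who wants to attempt the manual route, here’s my untested sketch of a diskutil invocation that should approximate that layout (the device node /dev/disk2 and the volume name MyTimeCapsule are placeholders; check yours with diskutil list, and note I haven’t verified whether the Time Capsule wants a GPT or APM partition map):

    diskutil partitionDisk /dev/disk2 3 GPT \
        JHFS+ APconfig 1G \
        JHFS+ APswap 1G \
        JHFS+ MyTimeCapsule R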

Archiving data

My main goal was to migrate the data from the old Time Capsule disk to the new disk. Happily, AirPort Utility has a handy ‘Archive’ button that will do just that. It will only copy data to a USB-attached hard drive, though, so you’ll need a USB drive enclosure to perform the migration. To start the migration:

  1. Put the new, partitioned drive into a USB enclosure.
  2. Plug the USB enclosure into the Time Capsule.
  3. In AirPort Utility, go to the ‘Disks’ pane.
  4. Click the ‘Archive’ button.
  5. Select the third, large partition on the new disk as the target.
  6. Start the archiving process.

Copying my 460GB of data took a number of hours, so it’s best to be able to leave it overnight.

Once the archival is complete, you’ll need to do a bit of fiddling with the file structure to make the data usable by the Time Capsule. The archival process will put the entire contents of the main disk partition into a folder called “<Time Capsule name> Archive” (or something to that effect). Underneath that, I had a folder named ‘Shared’, which contained the shared-folder Time Capsule data. Since I have configured my Time Capsule to use user accounts, there was also a ‘Users’ folder, with another folder for each account inside of that.

These ‘Shared’ and ‘Users’ folders need to be in the root directory of the disk for them to be usable by the Time Capsule. So, you’ll need to:

  1. Plug the USB enclosure into your Mac
  2. Open the drive corresponding to the third, large partition on the disk
  3. This partition should contain a single folder, ‘<Time Capsule name> Archive’. Move the contents of this folder into the root folder of the drive, then delete the (now-empty) ‘Archive’ folder

Once this is done, the drive will be usable in the Time Capsule, and all of your old data will be accessible. Swap disks in the Time Capsule as discussed in the tutorial listed above, and off you go!

Movin’ on Up

Until now, this blog used Blogger. The reason for this was, primarily, lethargy. While not notable for much else, this blog is at least venerable, dating back to November of 2001, back before blogs were cool, man. Blogging platforms were rather more limited back then (or at least I was oblivious to better options), so I chose Blogger. I never bothered to change services since, well, I rarely bothered to write in it.

I’m also one of approximately three people who were using Blogger’s ‘publish via FTP’ option. I recently got an e-mail from Google saying that they were discontinuing the FTP publishing option because it was annoying them and because approximately three people were using it. With this turn of events I decided it was time to renovate a bit, so I installed WordPress and imported everything into that.

I’d never used WordPress before; it’s slicker than I expected, and I’m pretty impressed. I look forward to not writing in it.

Solar Energy

For a while I’ve intended to quantify the rate at which the Earth receives energy from the Sun and how that relates to our current rate of usage, to better understand the practicality of solar power generation on a massive scale. Wikipedia comes to the rescue (via dKos), summarizing that data handily and with cool graphics. The percentage of the Earth covered by those black discs is heartening: for the vast amount of energy we consume, it’s still minute compared to what’s given to the planet every day.

The State of Homedebtorship in America

Wow:

  • Nearly one in 10 households with a mortgage had zero or negative equity in their homes as of September 2005, according to First American Real Estate Solutions, an arm of title-insurance company First American Corp. The study of 26 million homes in 36 states and the District of Columbia found that one in 20 home borrowers was upside-down by 10% or more.
  • The situation is even grimmer for recent borrowers. Of those who bought or refinanced homes in 2005, 29% had zero or negative equity, and 15.2% were underwater by 10% or more.

I’ve for some time been a housing bear, but these numbers are worse than I’d have predicted. And this was at the peak of the market!