At that point, I migrated to Linode, but they never quite felt as dedicated to doing things “right” as Slicehost had been. Over the next few years, they experienced several major security breaches, and still have not been able to explain to my satisfaction how these came about or what steps are being taken to avoid them in the future.
“I also noticed that my harddisk … seems to be going, so keep your fingers crossed. I thought I’d better upload what I have now, rather than notice that I lost everything when I get back to work on Monday.. (Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it ;)”
Having lost many a hard drive myself, most recently just last month, I’ve spent a lot of time looking for a good, modern Internet-based backup solution. Years ago, Mozy began offering such a service, and I tried it out at the time, but for whatever reason I didn’t keep using it. Other products have since appeared, such as Dropbox and Google Drive, and with the popularization of “The Cloud,” the focus has shifted from a simple replacement for tape backups to a full document store. Features such as online document editing have become common, and synchronization across multiple machines is also becoming a common use case.
Many of these products offer a free service tier that is sufficient for storing a reasonable amount of data: perhaps not a full disk image with applications, but at least enough for my entire documents folder. So the other day I decided to try out Dropbox and Google Drive. I signed up for a free account on each, and installed the free software on my......
I’ve maintained this blog off and on for a few years now on a self-hosted WordPress instance, but that has felt a bit bloated and unwieldy to me (especially the awkward WYSIWYG post editor), so I’ve begun porting the blog over to Jekyll, hosted on GitHub. It’s currently a work-in-progress, but I’ve made it far enough to push out an initial version while I work on migrating the remaining posts, porting the Disqus comments, tuning the CSS, and writing about the experience.
I actually wrote this up a few months ago as a reply to a blog entry, detailing my own experience and variations on this process, but I’m reposting it here on my blog for my own reference and for anyone else who is interested. This process can, of course, also be adapted to PXE-boot other things, such as a CentOS Kickstart install.
Here are the steps I used to successfully PXE-boot OpenBSD from OSX. My MacBook Pro is connected to the Internet via the AirPort, and my soon-to-be OpenBSD box is connected to my Mac via the Ethernet port. As a slight added complication, my WLAN uses the 192.168.2.x subnet (which conflicts with the address range generally used by OSX’s Internet Sharing), so Internet Sharing needed to be adjusted to use a non-default address range.
So first, I fixed my Internet Sharing address conflict:
Disable Internet Sharing
Close any System Preferences windows
Edit /Library/Preferences/SystemConfiguration/com.apple.nat.plist to add: SharingNetworkNumberStart 192.168.3.0
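In plist form, that amounts to adding a key/value pair along these lines (a sketch; the exact surrounding structure of com.apple.nat.plist may differ between OSX versions):

    <!-- inside /Library/Preferences/SystemConfiguration/com.apple.nat.plist -->
    <key>SharingNetworkNumberStart</key>
    <string>192.168.3.0</string>

After saving the file, re-enabling Internet Sharing picks up the new address range.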
Modern versions of Perl provide support for threads. On *nix systems, this is implemented via the system’s pthreads (AKA “POSIX threads”) support. This means that each thread looks to the operating system like its own lightweight process. POSIX threads on Linux can run on separate cores, can have separate process IDs and names, and can receive separate signals. Most of the existing Perl documentation isn’t very clear on how to manage these special attributes from Perl.
First of all, as per the perlvar manpage, it is possible to set the process name by modifying $0. In modern versions of Perl (>5.8), this affects two separate system properties: the process command-line, and the process name. The difference between these two properties is important to understand.
The process command-line is part of the original memory block the OS allocates when creating a process. Initially it is used to pass the command-line options into the process, and it is what populates @ARGV. After the process has started, it can overwrite this area with whatever it likes, and the result will be displayed by certain process-management tools such as ps and top. The size of this data is limited to whatever space the OS originally allocated for the *argv array. Because all of the threads in a single process share the same *argv area, there can be only one command-line across all threads of the process.
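To make that concrete, here is a minimal sketch of setting $0 (the name string and the sleep are just placeholders for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Overwriting $0 rewrites the *argv area, so process-management tools
    # like ps and top will display the new string in place of the original
    # command line.
    $0 = 'my-daemon: waiting for work';

    sleep 60;   # check `ps -ef` from another terminal while this runs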
Additionally, Linux keeps some internal kernel metadata about each process,......
Traditionally, website resource management has been handled more or less opaquely by the browser. A webpage would declare a number of linked resources via src or href attributes on various HTML tags, and these would be fetched by the browser as it saw fit. Later, some control of resource caching was provided via server-side HTTP headers, but the browser remains the mastermind.
There are three main problems with this way of doing things. First, the caching of resources is not easily modified: once the browser has fetched a resource with a Cache-Control header defined, there is no telling if or when it will decide to refresh its copy short of the pre-defined TTL. Second, the order and progress of the resource fetching cannot be easily monitored or controlled. Finally, if you have used Cache-Control: no-cache, even though HTTP 304 responses may save you the bandwidth of re-downloading some large files, the browser still needs to fire off a separate request for every resource. Of course there are hackish work-arounds, such as appending a nonce to the query string to force a refresh, attaching onload events to all img tags on the page, or using spritesheets to bundle images together into a single HTTP request, but the usefulness of these is limited.
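For illustration, the first two work-arounds look something like this sketch (the element ID and file name are made up):

    var img = document.getElementById('logo');

    // monitor fetch progress by hooking the load event
    img.onload = function () {
      console.log('logo refreshed');
    };

    // force a refresh by appending a nonce to the query string,
    // defeating the cache by making the URL unique
    img.src = 'logo.png?nonce=' + new Date().getTime();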
The Bash HEREDOC feature is quite useful when you need to script the stdin input to a command; however, not all commands can be coerced into reading their input from stdin. Some commands require that you supply filenames from which to read some of their input. In these cases, your Bash script could create a temporary file on disk, write some content to it, execute the command, and then delete the temporary file afterward. This works, of course, but wouldn’t it be nifty if there was a way to do this all at once via a HEREDOC? In fact there is, but it is not immediately obvious.
The standard HEREDOC syntax uses a double left angle bracket to direct input to stdin. Like any other Bash redirection, this is just the standard form with an assumed target filehandle (0, aka stdin, in this case). You can specify a different filehandle if you like. Filehandles 1 and 2 are stdout and stderr respectively, and are thus not useful for input. But what about 3? Filehandle 3 doesn’t normally exist at process creation, but Bash can create it if you ask.
So we have something like this:
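    # some-command is a placeholder; the heredoc is attached to
    # file descriptor 3 instead of stdin
    some-command 3<<EOF
    input bound for fd 3
    EOF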
Well, that isn’t very useful in itself because the command probably isn’t aware that it should look at file descriptor 3 for input, but that’s where the next trick comes in: There is a directory on Linux under the /dev hierarchy which allows access......
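Putting the pieces together, the trick presumably ends up looking something like this sketch (relying on the /dev/fd pseudo-files, which on Linux map to /proc/self/fd):

    # grep insists on a filename, so hand it fd 3 via /dev/fd
    grep -c 'needle' /dev/fd/3 3<<EOF
    hay
    needle
    more hay
    EOF

This prints 1, since exactly one line of the heredoc matches.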
This doesn’t quite work in some more complex situations, though. For example, in a configuration where the SSL layer is handled by a reverse-proxy/load-balancer like an F5 Big-IP, the built-in Apache HTTPS mechanism is useless because all connections arrive at the Apache box as HTTP. In this case, the HTTPS forcing could be accomplished with a rule on the proxy itself, but this can be complex to maintain, since it places the configuration further from the data in question. Alternately, if the proxy has an option to enable the X-Forwarded-Proto header, you can still do the redirect at the Apache layer using......
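A common way to express that check is a mod_rewrite rule keyed on the header; a minimal sketch (assuming mod_rewrite is enabled):

    # redirect to HTTPS unless the load-balancer says the
    # original request already arrived over HTTPS
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]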
Recently I had been using Tomcat as a Java Servlet container for a project at work. This works well in the context of a tightly-integrated set of servlets and JSP pages like a typical website, but the project I was working on is intended to be a self-contained module which presents an HTTP API and should be easily deployable without worrying too much about shared settings and shared libraries on the target box. I also wanted the ability to profile, debug, and run JUnit tests on the component from within Eclipse, without requiring an additional, separate deployment to a Tomcat server (even if it is a local Tomcat server on my dev machine).
So I decided to switch to a design involving embedding a Jetty container into a plain Java app. This would give me the ability to have a clean deployable artifact with no external dependencies, and also to launch the servlet container and the HTTP tests against it from the same JUnit script. Sadly, the documentation for embedding Jetty is not the most comprehensive.
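For reference, the heart of the embedded setup can be quite small. A minimal sketch (MyApiServlet is a placeholder, and the org.eclipse.jetty.servlet packages from the Jetty 7/8 era are assumed):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class EmbeddedMain {
        public static void main(String[] args) throws Exception {
            // bind the container to a port; no external Tomcat required
            Server server = new Server(8080);

            // mount the API servlet at a context path
            ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new MyApiServlet()), "/api/*");
            server.setHandler(context);

            server.start();
            server.join();  // in a JUnit test, skip join() and call server.stop() in teardown
        }
    }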
I just pushed a new project up to GitHub. It’s the beginning of a console framebuffer graphics library for FreeBSD. While Linux has SVGALib, and BSD used to, there doesn’t seem to be anything current for this purpose, and the documentation for the kernel interfaces is minimal (and even those seem mostly to just be thunks to the old interrupt 10h functions). I need this for a certain BSD-based side project, and since no one else seems to have it working at the moment, I had to do it myself.
Then say this website needs to add another, separate progressive-enhancement script. Or two. Or......
One of my friends has a habit of
posting obfuscated strings as Facebook comments when he gets bored.
Being lazy, I don’t
usually feel like spending my time doing things like converting binary
to decimal or hex
Sometime last year I came across Node.js, the server-side port of Google’s V8 ECMAScript engine. At first I was interested in it mostly as a novelty to allow things like form-validation library portability from client to server side. After trying it out, reading through the documentation in more detail, and watching Ryan’s presentation, I became a lot more excited about it.
Since Node.js is essentially just a set of bindings that allow V8 to interact with a server-type environment instead of the more familiar DOM, Ryan had complete freedom to implement all of the core I/O API calls in a way that relies exclusively on callbacks to avoid blocking. It is this clever trick that allows Node.js to perform extremely well in unexpected roles such as a webserver, while at the same time avoiding the complex select()-loop logistics usually associated with single-threaded daemons. This......
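To make the callback style concrete, here is a minimal sketch using Node’s core http and fs modules (the file name and port are made up):

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      // fs.readFile is non-blocking: the callback fires when the read
      // completes, leaving the single thread free to service other requests
      fs.readFile('index.html', function (err, data) {
        if (err) { res.writeHead(500); res.end('error'); return; }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(data);
      });
    }).listen(8124);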