I use Vim almost exclusively for my text editing. Every now and again I need to get the contents of the file into the clipboard so I can paste it into something, like a browser window maybe. Highlighting with a mouse is fine for occasional use, but if you’re doing it frequently or if you need to select more than is visible on the screen it’s annoying.
There’s a handy command called xclip that lets you put things directly into the clipboard. For pasting into a browser with Ctrl-V you need to specify the right selection (there’s more than one), but something like this works:
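The exact command hasn’t survived here, but it is likely along these lines, reading a file into the CLIPBOARD selection (the one Ctrl-V pastes from):

```sh
xclip -selection clipboard < somefile.txt
```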
What I’ve found particularly useful is to write content directly to the clipboard from Vim:
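The Vim command elided here is presumably the `:w` variant that pipes the buffer through a shell command, something like:

```vim
:w !xclip -selection clipboard
```

The `!` means the buffer is piped to xclip’s standard input rather than written to a file on disk.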
Of course you can be more specific and use a line range if you like (this writes the six lines from line 5 to 10 into the clipboard):
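The ranged version is presumably something like:

```vim
:5,10w !xclip -selection clipboard
```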
It’s worth remembering that xclip needs to be able to talk to your X session to get the data in there. Not a problem locally, but if you’re running Vim remotely you need to make sure you’ve got X11-forwarding enabled over your SSH connection.
SSLv3 is broken, and you shouldn’t use it any more. However, there’s still lots of old hardware with embedded web interfaces that use it: air conditioners, UPSs and other stuff. Essentially this hardware has been abandoned by its manufacturers, and there’s no hope of a firmware update for it.
At the same time, browsers such as Firefox 39 (released 2015-07-02) are dropping support for SSLv3, so you can no longer access these devices. One option, even worse than continuing with SSLv3, is to disable encryption entirely and use clear, unencrypted HTTP, but you don’t want that. Instead, you can use a reverse proxy. On an internal webserver, we simply added these lines (these were for Apache 2.2):
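The configuration itself wasn’t preserved here; for Apache 2.2 it is likely something along these lines, with the path and backend address as made-up examples:

```apache
# Let Apache speak SSL to the backend device
SSLProxyEngine On

# Forward /olddevice/ to the legacy SSLv3-only web interface
ProxyPass        /olddevice/ https://192.168.0.50/
ProxyPassReverse /olddevice/ https://192.168.0.50/
```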
Now when you go to https://internalwebserver/olddevice/ it will pass through your request, encrypted, to the old device and everything is fine. You can add as many as you need. Of course, you should be using TLS on this web server, or it’s all a bit of a waste of time. 🙂
In work we use LTSP to boot up thin clients. Nothing runs locally on these thin clients; they’re used for either VNC or rdesktop access. One problem we had was that if the LTSP server was rebooted, the NBD client would lose its connection to the server and never re-establish it. This mattered because we want to be able to remotely shut down thin clients overnight.
So, we came up with a quick and dirty way of copying the root filesystem image locally to the thin client. Your thin client needs enough free RAM to store it all (these are diskless); in our case the image is about 290MB while the thin clients have 1GB RAM. We don’t need much RAM for the actual system, which just runs X and VNC/rdesktop.
Enter the ltsp-chroot, then add the following contents to a new file:
# Copy entire image from LTSP server, through NBD
It’s very simple. We’re adding a small script to the “INITial RAM FileSystem” that Linux uses in the early stages of booting up. It uses dd to pull the entire root filesystem from the LTSP server, which in practice only takes a few seconds. Then it stops the nbd-client, and moves our new local copy into the place where the rest of LTSP expects to find it.
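Most of the script itself hasn’t survived here; a sketch of the idea just described, with the device names and paths as my own guesses rather than the originals, would look something like:

```sh
#!/bin/sh
# Sketch of an initramfs-tools boot script; paths are illustrative.
PREREQ=""
case "$1" in
    prereqs) echo "$PREREQ"; exit 0 ;;
esac

# Copy entire image from LTSP server, through NBD, into local RAM
dd if=/dev/nbd0 of=/tmp/image.img bs=1M

# Drop the network block device now that we have our own copy
nbd-client -d /dev/nbd0

# Put the local copy where the rest of the LTSP boot expects
# to find its root image
mount -o loop -t squashfs /tmp/image.img /rofs
```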
When you’ve created the file, and still within the ltsp-chroot, run:
to rebuild the initramfs.
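The rebuild command, assuming stock Debian/Ubuntu initramfs-tools inside the chroot, is presumably:

```sh
update-initramfs -u
```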
Finally, exit from the ltsp-chroot and, from the main server, run
to copy the new initramfs into the TFTP server’s path.
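The command here is likely LTSP’s own helper, which copies kernels and initramfs images into the TFTP tree:

```sh
ltsp-update-kernels
```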
For the last day I thought the disks in my home server sounded a bit busy. Usually they’re so idle they spin down for much of the day, but they were clicking away when they shouldn’t be. Later the same day I got delightful logcheck mails saying things like “I/O error, dev sda, sector 976755863”.
Since there was already a spare disk in the machine, I ran “zpool replace tank dying-disk spare-disk”. It seems the sick disk was actually nearly dead: the output of zpool status showed the resilver running at 1.4KB/s, and slowing down. It would have taken about 7 years to complete at that rate. Clearly it was going to take a while, even if it did improve. Bear in mind, it took about seven or eight minutes for zpool status itself to complete, instead of being instant.
In the end, I ran shutdown -h, and when that got stuck, just halt (which took a minute to respond). I pulled the disk out, powered the machine back up, and the resilver continued from where it was. ZFS noticed the disk was missing, and is just using the remaining good disk in the mirror to complete.
Interestingly, despite passing a scrub two days ago there are now data checksum errors on the remaining disk. zpool status -v shows me the filename, and fortunately it’s a file I can trivially recreate.
I wonder: if I’d manually used zpool attach, based on the surviving disk, would it have resilvered normally, maybe falling back to the dying drive when it found these errors, possibly without losing any data? Then I’d be able to detach the dying disk. I’ll have to wait until the next disk dies to try that…
While setting up a second bridge for virtual machines on an Ubuntu server, I ended up with this error:
# ifup br1
Waiting for br1 to get ready (MAXWAIT is 3 seconds).
RTNETLINK answers: File exists
Failed to bring up br1.
I found lots of different explanations and possibilities described, but in my case it was simply that, when I copied the previous bridge definition, I had kept the gateway line. It seems you can only have one gateway defined in your interfaces file, so removing the duplicate fixed it for me.
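A minimal sketch of the relevant /etc/network/interfaces stanzas, with made-up addresses, keeping the gateway on only the first bridge:

```
auto br0
iface br0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth0

auto br1
iface br1 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports eth1
    # no gateway line here: only one default gateway is allowed
```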
Hopefully somebody finds this and it turns out to be the same fix they need.
There’s a character called a “swung dash”, which is like a long, gently wavy hyphen. In Unicode it’s U+2053, and if your font supports it, it might look like ⁓. In some fonts it looks a bit flat, I think.
I was asked recently to implement one in LaTeX. Checking the Comprehensive LaTeX Symbol List, the only suggestion is to use the mathematical \sim, but \sim isn’t quite the same: it’s a much curvier shape, and just doesn’t look right.
So, here’s my alternative command. Basically it’s just an extra-big tilde pulled down a bit. It’s not dramatically different, but it’s closer to what I think it should be, and the person asking for it is happier with it:
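The original definition wasn’t preserved here; a sketch of the same idea, an enlarged tilde lowered slightly (the name \swungdash and the exact factors are my own guesses), could be:

```latex
% Requires \usepackage{graphicx} for \scalebox
% Swung dash: an extra-big tilde, pulled down a bit
\newcommand{\swungdash}{\raisebox{-0.5ex}{\scalebox{1.5}{\textasciitilde}}}
```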
One of the projects in work is a bibliography, a painstakingly-produced list of publications for a specific research area. It wouldn’t be so bad if it were all English-language, but much of it relates to linguistics. There are diacritics all over the place (sometimes stacked on top of each other), mixtures of languages and alphabets within the one title, individually-italicised words, small caps, superscript, subscript, and combinations of all the above. This project started 12 years ago with the intention that, when complete, it would be a printed book. As a result, the markup chosen to handle that mess was LaTeX (remember, this was well before the days of Markdown or similar). There could be another ten years of work to go.
The website was designed with a simple alphabetical navigation system: you can browse through authors or journals, etc., by initial letter, and it was easy enough to find what you wanted. There’s on-the-fly TeX-to-HTML conversion for the website, using a recursive regular expression that makes me feel both guilty and proud. But now, after 12 years, there are about 13,000 entries and the navigation has become unwieldy. They want a real search.
I threw together a very quick search using a few simple SQL queries (did I mention I work on this one day per week, with ~20 users’ support queries to keep happy?) that did a tolerable job, but really they want to be able to specify multiple fields, wildcards and all that sort of thing. Searching the raw database tables won’t work because of all the TeX markup.
A quick google and my first stop was Apache Solr, which seems to be one of the best-known search engines there is. It has all the features I could think of, and lots of features I hadn’t thought of. My first impression was that it’s huge: it’s a standalone Java application, the zipped download alone is nearly 150MB, and there’s a pile of documentation. I succeeded in building XML-formatted output (including lots of embedded HTML) from the bibliography, got it into Solr, and tinkered with some searches.
In the end, though, I couldn’t shake the feeling that Solr was just too big. There was a lot of configuration. Just setting up Solr as a standalone daemon was a task in itself. Many of the potentially useful features were killed by the nature of our data; language-specific stemming and so on doesn’t work very well if you’ve got English, Irish and Greek mixed together in the one title.
So I started to look around again and came across Xapian. It’s smaller and more lightweight, has the features I need, and has direct bindings for several languages, including PHP, which is what I need. From what I can tell, this means I don’t need a separate daemon. The documentation is written with code samples in Python (yay!), and I’m planning to use xml.etree.ElementTree to re-use my existing XML output and feed it into Xapian.
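As a sketch of that plan, here is the ElementTree side, assuming a made-up XML layout with <entry> elements (the real export format will differ); each resulting (id, text) pair would then be handed to Xapian’s indexer:

```python
import xml.etree.ElementTree as ET


def entries_for_indexing(xml_text):
    """Parse bibliography XML into (id, text) pairs ready for indexing.

    The element names here are assumptions for illustration,
    not the project's actual export format.
    """
    root = ET.fromstring(xml_text)
    for entry in root.iter("entry"):
        # Concatenate whichever fields are present into one searchable blob
        fields = [entry.findtext("author", ""), entry.findtext("title", "")]
        yield entry.get("id"), " ".join(f for f in fields if f)


sample = """
<bibliography>
  <entry id="b1"><author>Ó Sé</author><title>Gaeilge Chorca Dhuibhne</title></entry>
  <entry id="b2"><title>An untitled-author entry</title></entry>
</bibliography>
"""
pairs = list(entries_for_indexing(sample))
```

Keeping the parsing separate from the indexing like this means it can be tested without a Xapian database to hand.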
I said I’d have a very basic working example on one or two fields in about three weeks (which for me is the cumulative “spare” time from three busy working days). Wish me luck…
In my last post about the testing goat I mentioned there’s now an official Unicode codepoint for “GOAT”, U+1F410.
At the time, I tried typing it in. Under Linux you just press ctrl-shift-u (you’ll see an underlined letter u), type the hex digits for the code you want, press space and continue on. Easy. Having installed the free Symbola font, I could see my little goat in the editor. Happy Days!
Until I went to preview the post, at which point my little goat, and everything after it, had disappeared. Fortunately it was the last thing in my post, but if it was higher up I’d have lost some of my work. Not good! It was late and I was tired so I left it out, a little disappointed.
So, looking again this evening I found that there’s a known problem that WordPress gets confused if it sees a Unicode character above U+FFFF. If you install the Full UTF-8 plugin, it works again. Without a doubt, this plugin, or something like it, should be merged into the core. Right now.
PHP and Unicode
In my job I have the dubious pleasure of maintaining a very old PHP application. Several hoops are jumped through to keep UTF-8 characters intact, but the hoops still work so I generally just leave it alone. This WordPress issue just had me googling again, and it seems to confirm that PHP (which is the language WordPress is written in) still doesn’t support Unicode natively. Really. In 2014.
It seems that Unicode support for PHP was first proposed in 2005 for what was planned to be PHP 6. Nine years later, and we’re just at 5.6.1. I came across this presentation on Slideshare from 2011 describing how the PHP+Unicode project reached a certain point and just ran out of steam. It seems nothing has happened since.
The nine years of bad history associated with the name “PHP 6” even has people suggesting that the next actual major release of PHP should be called “PHP 7”. It’s that bad.
Conclusion, for now
That PHP application I maintain is well over ten years old. It’s fairly stable, but has accumulated various bits of cruft over time. Adding new features is awkward and really it needs a rewrite. Since it uses lots of international characters I’d really like clean Unicode support, so I’m strongly drawn to using Python 3. It’s nearly 6 years old and supports Unicode properly. Now I’ve to pick a web framework. I’ll probably have a go with Django for now, simply because Harry’s TDD book uses it.
Oh and finally, just because I can, even though WordPress doesn’t want me to, here’s a goat: 🐐
I’m not new to Python at all. I still have a copy of André Lessa’s Python Developer’s Handbook, which the receipt says I bought on 15th March 2001, and which covers Python 1.6. Unfortunately, in all the years since, I’ve never used Python much. My postgrad studies mostly used Verilog and my day job generally involves bash scripts and maintaining some really old PHP.
Still, it’s a language I feel I want to use a lot, and I’ve attended the last two PyCons in Ireland. At PyConIE 2013 I went to a tutorial on Test-Driven Development by Harry Percival, and at PyConIE 2014 I won a copy of his book Test-Driven Development in Python. I say won, but simply there were 40 books being given away (20 of these, and 20 of High Performance Python) and I was 40th in the queue. It feels like winning something, at least 🙂
Anyway, at the 2013 tutorial Harry made reference to the “Testing Goat”, and I thought it was just a whimsical idea of his, but the goat was back in 2014 and it’s on the cover of his book.
A bit of googling and it seems the Python Testing Goat is a thing.
As best I can tell, the Testing Goat was a running joke at PyCon 2010 (see here) and it’s become a mascot for Python testing ever since. There was even a successful campaign to have O’Reilly put a goat on Harry’s book instead of the expected snake.
2010 was also the year Unicode 6.0 was released, which added (amongst other things) U+1F410 GOAT. Surely not a coincidence?