I use Vim almost exclusively for my text editing. Every now and again I need to get the contents of the file into the clipboard so I can paste it into something, like a browser window maybe. Highlighting with a mouse is fine for occasional use, but if you’re doing it frequently or if you need to select more than is visible on the screen it’s annoying.
There’s a handy command called xclip that lets you put things directly into the clipboard. For pasting into a browser with Ctrl-V you need to specify the right clipboard (X has more than one selection, and Ctrl-V pastes from the one called CLIPBOARD), and something like this does the job:
xclip -selection clipboard < filename
What I’ve found particularly useful is to write content directly to the clipboard from Vim:
:w !xclip -selection clipboard
Of course you can be more specific and give a line range (this writes the six lines from line 5 to line 10 into the clipboard):
:5,10w !xclip -selection clipboard
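If you do this a lot, a mapping saves the typing. This is just a sketch for a ~/.vimrc, and the key choice is arbitrary; in visual mode, pressing : pre-fills the '<,'> range, so the mapping writes just the selected lines to xclip:

```vim
" Send the visual selection to the X clipboard via xclip
" (second <CR> dismisses the command output prompt)
vnoremap <Leader>y :w !xclip -selection clipboard<CR><CR>
```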
It’s worth remembering that xclip needs to be able to talk to your X session to get the data in there. Not a problem locally, but if you’re running Vim remotely you need to make sure you’ve got X11-forwarding enabled over your SSH connection.
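If you connect to the same machine regularly, rather than remembering ssh -X each time you can turn on X11 forwarding per-host in ~/.ssh/config (the host names here are made up):

```
Host myserver
    HostName myserver.example.com
    ForwardX11 yes
```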
At work we use LTSP to boot up thin clients. Nothing runs locally on these thin clients; they’re used only for VNC or rdesktop access. One problem we had was that if the LTSP server was rebooted, the NBD client would lose its connection to the server and never re-establish it. This mattered because we want to be able to remotely shut down thin clients overnight.
So, we came up with a quick and dirty way of copying the root filesystem image locally to the thin client. Your thin client needs to have enough free RAM to store it all (these are diskless), and in our case the image is about 290MB while the thin clients have 1GB RAM. We don’t need much RAM for the actual system, which only has to run X and VNC/rdesktop.
Enter the client environment with ltsp-chroot, then add the following contents to a new file:
#!/bin/sh
# Standard initramfs-tools boilerplate: no prerequisites to declare
case $1 in prereqs) exit 0 ;; esac
# Copy entire image from LTSP server, through NBD
dd if=/dev/nbd0 of=/dev/ltsp.img
# Stop the NBD client and put our local copy where LTSP expects the device
nbd-client -d /dev/nbd0
mv /dev/nbd0 /dev/nbd0.orig
ln -s /dev/ltsp.img /dev/nbd0
It’s very simple. We’re adding a small script to the “INITial RAM FileSystem” (initramfs) that Linux uses in the early stages of booting up. It uses dd to pull the entire root filesystem image from the LTSP server, which in practice only takes a few seconds. Then it stops the nbd-client and moves our new local copy into the place where the rest of LTSP expects to find it.
When you’ve created the file, and still within the ltsp-chroot, run:
update-initramfs -u
to rebuild the initramfs.
Finally, exit from the ltsp-chroot, and from the main server run:
ltsp-update-kernels
to copy the new initramfs into the TFTP server’s path.
For the last day I thought the disks in my home server sounded a bit busy. Usually they’re so idle they spin down for much of the day, but they were clicking away when they shouldn’t be. Later the same day I got delightful logcheck mails saying things like “I/O error, dev sda, sector 976755863”.
Since there was already a spare disk in the machine, I ran zpool replace tank dying-disk spare-disk. It seems the sick disk was actually nearly dead: the output of zpool status showed the resilver running at 1.4KB/s, and slowing down. It would have taken about seven years to complete at that rate. Clearly it was going to take a while, even if the rate improved. Bear in mind, it took about seven or eight minutes for zpool status itself to return, instead of being instant.
In the end I ran shutdown -h, and when that got stuck, just halt (which took a minute to respond). I pulled the disk out, powered the machine back up, and the resilver continued from where it had left off. ZFS noticed the disk was missing and is simply using the remaining good disk in the mirror to complete the resilver.
Interestingly, despite passing a scrub two days ago, the remaining disk now shows data checksum errors. zpool status -v shows me the affected filename, and fortunately it’s a file I can trivially recreate.
I wonder: if I’d manually run zpool attach using the surviving disk as the source, would it have resilvered normally, perhaps falling back to the dying drive when it hit these errors, possibly without losing any data? Then I’d have been able to detach the dying disk. I’ll have to wait until the next disk dies to try that…
While setting up a second bridge for virtual machines on an Ubuntu server, I ended up with this error:
# ifup br1
Waiting for br1 to get ready (MAXWAIT is 3 seconds).
RTNETLINK answers: File exists
Failed to bring up br1.
I found lots of different explanations and possibilities described, but in my case it was simply that when I copied the previous bridge definition, I had kept the gateway line. It seems you can only have one gateway defined in your interfaces file, so removing the extra one fixed it for me.
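For illustration, a minimal /etc/network/interfaces fragment with two bridges might look something like this (all interface names and addresses are made up); note that only br0 carries a gateway line:

```
auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0

auto br1
iface br1 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    bridge_ports eth1
```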
Hopefully somebody finds this and it turns out to be the same fix they need.