Some people give the impression of thinking the purpose of my life is to make a mark upon the world, through really excellent system administration I suppose. I don't understand how someone could hold such a ridiculous idea. Does it make you feel better to think I have a ``passion'' for fixing your broken Outlook .PST files? fuck off. seriously.
However we do need to communicate with each other, and take some positions on issues, so we need pages like this, and since I've written it I suppose it'll be associated with my ``identity'' somehow. Please try to keep it in context, though, and don't make up little capitalist fanfic about my life to make yourself feel better. TIA.
Ok. There are five sections:
At best, I've found people usually read about a third of an article, then post a link to it from their blog saying ``He has some good points.'' It's hard to get through an entire rant. There's still something wrong with my writing style. Sorry for that.
They're expensive, but you can either just pay, or you can get them used on eBay, especially if you can find yourself some ExtremeWarez or IOS imagez to upgrade them without having a service contract. :)
The biggest problem with L3 switches is their fans. They're fucking noisy. Can someone trick up a watercooling mod for an Alpine 3808? That would please me immensely.
And I've become sad and disappointed.
NetBSD, the first free Unix to deliver usable Scheduler Activations, has retreated, too. The reason they state most prominently: they never finished their implementation enough to work on SMP machines (much less NUMA :) ). The real reason seems to be about the same as Sun's, in that implementation complexity drove their ultimate decision. If they'd started with Unstable Threads, maybe they wouldn't have had to rip it out! We'll never know.
This was originally part of my position on DSL vs. cable, but I now believe that there is only a weak relationship between the two issues.
This is why I think the anti-trust case against Microsoft is moot. With competing technologies like Java in the pipe, they don't have a chance. Some day, we'll all be using the same kind of computer, the Java Computer! HOO-ray.
There are still two ideas left in this rant that might be kind of interesting. First, the Bluetooth ``slate'' idea, which is something like the eBook reader Josh showed me that turns completely off when you shut it, but this one connects to your always-on celfone. It doesn't run any programs itself, just gives you a bigger screen and a two-handed keyboard for interacting with programs on the celfone. The programs are required to re-render themselves onto the celfone screen without losing your place if the ``slate'' is shut off or runs out of battery.
And second that futuristic celfone applications ought to borrow user interface metaphors from webapps like Gmail and Facebook, and from emacs programs like Gnus, not from Windows 95 like some of them do now. Maybe we shouldn't think so much about the celfone's small screen and keyboard, and instead think (1) it's always-on and emphatically single-user and expected to never crash, all unlike laptops and desktops, and (2) it's virgin territory, where we can implement a completely new platform---because the screen and keyboard are small people won't think to fuss and whine when it doesn't look exactly like their laptops. They haven't even learned yet to whine when theirs doesn't look exactly like their friend's celfone, which gives us a droolably fantastic chance to invent some truly new and creative designs, and change the way people think.
templates/admin/<app>/<model>/change_form.html and overriding blocks found in
.../site-packages/django/contrib/admin/templates/admin/change_form.html (the admin app plus the templating system have cooperative path-searching features that will find this template automatically), but I don't want to do HTML development. Instead, I want to display uneditable fields using the exact same choice, foreignkey, datetime widgets as the editable versions, an idea which the Django developers obviously find ``unclean'' and won't support. There are some other attempts, but I found them either incomplete (they don't do the whole three-step thing my attempt does) or badly-factored (they do silly things like copy unchanged data over changed data to stop the user from changing it, which can break down in embarrassing PHP-like ways if there's quoting and splitting going on in sublayers that their by-hand ``copy'' routine doesn't handle). I can't be bothered to properly reblog what I did, so just have a look at my admin.py and models.py if you want to do it my way.
I use this to remind myself how to get around Gentoo rebuilding snags more quickly. The reason I use Gentoo is that I like to turn on almost all of the USE flags, ones which binary distributions force to Off and then conceal because they'd be too much trouble when it came time to patch things. Another reason is that I like to use Sun Java, because I think the fully-free competing Javas are so crap you'd be better off using a different language that competes with Java than trying to get things done in those second-rate, nonserious environments. I am probably wrong about that opinion, though (see: Android).
There's also a highly useful bit in this cheatsheet about using serial console to interact with Linuxes inside VirtualBox zones. Likely you will want to use VirtualBox to build Gentoo, so you might like this. Serial console is the *only* reasonable way to run a Linux VM guest. If there is some kind of NX, SPICE, or VirtualGL access to the guest, it should go over the virtual network adapter, because we are working in an environment with full source. There's no reason for any other kind of attachment.
Solaris in particular has a well-deserved reputation for obstinacy. OpenSolaris is getting better, but there are still some basic things you need to do right away to get Gentoo-like control of your machine, like the ability to copy it using tar | tar, the ability to log in to it over ssh without bullshit, and so on. This cheatsheet might help, but Solaris has become a lot less interesting in the last few months. I'll explain why.
The best feature of Solaris in my opinion is Branded Zones, which let you run Linux or older Solaris under newer Solaris, but Sun/Oracle has ruined it in two phases.
The promise of the feature was this: it allows you to run two different branches of the operating system simultaneously. A less-stable, more-recent, Ubuntu-like branch determines your hardware driver support, filesystem capabilities, and NFS serving. A more-stable but glacial branch determines your large-bloated-app compatibility. You can install Zimbra or some drooling PHP monster inside the glacial Branded Zone and keep it safely encapsulated there while moving forward with the simpler parts of your system in the global (host) zone.
The split between the exciting, unstable branch and the glacial, stable branch is roughly at the kernel/user boundary, but not exactly. Some userland tools must match the kernel closely, like the ones that configure network interfaces, and it's in this area that Solaris branded zones are clever compared to FreeBSD jails or Linux vservers. They've a huge XML-crappo-based framework to start and stop the zones which can take over the work of some of these tools from outside the zone, transforming the glacial network config tools into query-only versions without setting ability, so that, in theory albeit not fully delivered yet, you might do flow marking, tunneling, link aggregation, VLAN encapsulation, and even setting of things like TCP ECN through the zone framework, which would invoke native userland tools on the exciting host branch. They've also a design pattern for injecting one or two native tools into the branded zone that skip past the kernel's emulation layer but can be invoked within the zone. A combination of the two patterns gets the overall job done with minimal code. In fact, you can reduce the amount of code-writing and regression-testing by insisting that only the latest exciting branch and the latest glacial branch will work together. Then you actually do have to patch the glacial zones whenever you update the host kernel, but the two still remain separate branches, so you achieve the primary goal: the freedom the exciting zone has to evolve is greater, and the chance of disrupting bloated software within the glacial zone is less.
It makes a lot of sense. I think it's the only thing that makes sense, for servers, any more. Single image is just too much work!
For Linux you might, for example, run CentOS domU's or HVM guests with a Gentoo or Ubuntu dom0. But this is FTL compared to the OpenSolaris approach in several ways. First: RedHat is working way too hard to produce the sources CentOS rebuilds, because they are trying to be glacial and exciting at once, and Gentoo and Ubuntu are not as exciting as they could be if they reduced their package collection and focused on Xen, storage, and networking. Second: jail-like zones consume less memory, have direct access to the host's filesystem, can share text pages with other zones using CoW with no need for RAM dedup, can use different, fancier resource-capping strategies for CPU/RAM allocation since there is just one process scheduler for the host and all zones, can boot and shut down faster, can have a leaner, tighter networking code path, will work with minimal effort once the OS is ported to ARM CPUs, and involve absolutely no Xen crappo whatsoever!
Zones do not have to be glacial. You can make ``native'' OpenSolaris/IPS zones, too, but this isn't nearly as useful, because upgrading or downgrading the global zone can break them: you have to upgrade and downgrade the zones at the same time as the host system, which is cumbersome, mistake-prone, hard to plan, and likely to break apps inside the zones---and when dealing with flakey development builds you have to do a lot of upgrading and downgrading.
Unfortunately, Solaris 10 glacial/branded zones got a lot less interesting when Sun yanked their free-beer license from Solaris 10. They were always a little uninteresting because you never got any source code for Solaris 10: if you don't want to involve Linux, then you get to choose between stability and freedom. This leaves Linux branded zones and OpenSolaris branded zones.
OpenSolaris branded zones never existed. The only way to run an older Solaris build inside a zone was to run the binary-only SVR4-packaged Solaris 10. There isn't any way to pick a ``works-for-me'' unstable OpenSolaris branch and run it inside a glacial zone. If you use OpenSolaris zones, they have to be native and therefore have to match the host's version exactly. There actually was at one time a notion of stable branches in OpenSolaris: you could stick with the b
And now, Linux ``brandZ'' zones have been ELIMINATED as of b143 2010-06-11. The actual bug 6959264 is secret (so much for transparency) but you can see some related bugs: 6959276 and 6959270, which are about scouring the documentation to remove any evidence the feature ever existed. On the mailing list the developers say no one uses it anyway. The reasons no one used it:
I bet you'd heard of BrandZ because it was hyped like woah three years ago when Solaris 10 was announced as Sun's big comeback, and I have to say I found it highly compelling, but I bet you didn't know it has always been too crap to run Apache! so...Wow. It was basically like Wine for Linux: it lets you hold onto a few ancient desktop apps, and not even easily, because the architecture of zones was clearly never meant for that: it's for consolidating servers, not seamless desktops with legacy proprietary apps.
BTW, do NOT start any new OpenSolaris projects until the 2010.03 stable release comes out on opensolaris.org, and an unstable/incremental release newer than b134 comes out on genunix.org. Yes, a lot of good source *is* available, but (1) almost all of the developers are hired by Oracle, and (2) none of the other so-called ``distributions'' like Nexenta are doing full builds from 'hg'. They all use redistributable but binary-only pieces from the ipkg binary repository 'depot' that Sun builds, and this repository has not been updated between b134 (2010-03-09) and the time I'm writing this (2010-06-23). 2010.03 is supposed to be a stabl(er) build based on b134 and released on, like the name suggests, 2010-03-xx, but now a quarter of a year later the release is MIA. They're always late, but they've never been THIS late before, and if Oracle wants to take OpenSolaris proprietary again, in my judgment they will be able to do it without leaving any viable free-as-in-freedom forks behind.
04-22 18:11 --> javurscript events are all gebroken.
04-22 18:12 --> http://www.permadi.com/tutorial/jsEventBubbling/index.html
04-22 18:13 --> third button is supposed to say ``button down. button up. button clicked.'' in ff2 it says ``button down'' then gets stuck down. safari works.
04-22 18:13 --> second text field is supposed to disallow typing. in ff2 if you type fast some letters sneak in, but if you type slow they don't. goofy. broken/icky/unworkaroundable.
04-22 18:15 --> this is the type of shit why i stayed away from javurscript for a decade. no proper locking and unfixable race conditions. only good programmers, like the ones who work on kernels and DBMS's, are good at getting this stuff right. bad programmers blunder through it and get it wrong but seeming-to-mostly-work.
04-22 18:16 --> so there are all these bad-programmer languages where it's impossible to get it right, like javurscript apparently. bad programmers don't even notice.
04-22 18:17 --> if you want to _become_ a good programmer, and get trapped on one of those languages, you're fucking doomed. you will never be able to learn how to ``get it right.'' eventually you will become numb to the pointlessness of other people telling you how you should start trying to get things right.
04-22 18:17 --> so looks like i'm headed for doom
For a first try, it doesn't seem to work with IET. I guess IET isn't offering the right mode pages to the Solaris initiator? There seems to be some config option for IET to fake a devid, but this would not be useful because it would tie exported devid's to fragile Linux device names that move around all the time, thus actually making things worse!
I've written an
/etc/init.d/ietd that rewrites IET's
config file, adjusting Linux device names to keep target names matching disks' probed
serial numbers. It makes sure iSCSI target names stick to serial
numbers even when devices move around. Better yet, if you for example
dd conv=noerror,sync copy a failing disk onto another, you can
edit the target's serial number in ietd.conf to give the new disk the
same iSCSI target name, which will make Solaris accept the new disk as
the old one. Without this trick, ZFS is so obstinate it won't look at
the data on that new disk: you have to move the new disk into its old
vdev with ``replace'', which will disregard the partially-correct,
partially-garbled contents of the disk instead of being ready to use
its partially-correct parts to heal potential checksum problems with
the other members of the vdev during the scrub/resilver.
I'd much rather crack open Solaris so I can alter its device-name / vdev binding arbitrarily, even for imported pools. This would handle moving disks from one iSCSI host (``discovery-address'') to another, which my scheme doesn't. But for now it's much more within my reach to write awk scripts for Linux. The script is sort of RTFS-documented, but here's an example ietd.conf showing its syntax. In addition to setting the device name, the script is supposed to comment out the blocks of any drive that's missing.
There is a $0 version for Mac OS. If you have VirtualBox hosting Windows inside a Mac OS machine, maybe you would like to power off the guest and mount its NTFS disks using MacFUSE. This shows how.
BTW there is also a Solaris Nevada version of VirtualBox which I also use and like. I can't get RDP to work, though.
so, there is the trick for case-sensitive booting on PPC. I've been warned on the Internets that case-sensitivity breaks MS Word (seems ok so far) and Adobe CS3 (haven't tried it), so watch out.
The network has a lot of neat features that are useful most of the time but make it somewhat fragile and hard to upgrade. It has dynamic routing: BGP for the exterior routes and OSPF internally. We don't have an ASN---IPv4 BGP is mostly just an exercise, while IPv6 BGP is actually doing stuff. The network has IPv6 everywhere, and we have a fast tunnel to an experimental non-commercial network with diverse peerings. It deliberately uses hubs instead of switches. Some segments have an experimental arpless hack to resist ettercap. And upstream and downstream traffic is queued and conditioned by HFSC.
The diagram is way out-of-date. I have more machines, and others have fewer. We are using iSCSI, which means gigabit ethernet and switches, no more hubs. We use a lot of fiber, because for older equipment, fiber gigabit ethernet is cheaper than copper, and because we have equipment on the roof (not much yet, just one PeeCee). The switches are L3, so the main router now has only two Ethernet interfaces: one is a /30 link to the switch, and the other is for port monitoring. Those eight tulip ports worked very poorly---the thing could seriously only handle like 60kpps.
``ping <hostname>'' works even for guests' laptops.
I don't know if this makes sense, but you know, everyone has their own pet regional cabling style. Just look at the superior electrical wiring used in European buildings compared to the solid-core wire and daisy-chained circuits that Americans use.
Mine is a sort of ``third-world HAM radio operator'' style. I try to converge on two kinds of cable for everything---RJ12 and RJ45, both over solid-core CAT5---and make short pigtails converting between strangely-shaped connectors and these two. I've been pairing inner conductors with their overall-shield, and this seems to work fairly well for Maple Bus and PeeCee keyboards. I don't really understand baluns or RFI, but you can't tell me that my pigtails won't work because they do. Unfortunately, I am stuck with three kinds of cable instead of two because the pattern of assigning CAT5 pairs to RJ12 pins differs between telco/LocalTalk and my serial and Maple Bus pigtails.
There are a lot of claims about logging or journaling filesystems of various kinds being able to fsck quickly, or needing no fsck at all, or never losing data. But given the amount of regression testing these open source developers do, and the low quality of the equipment with which they're forced to work (IDE with unpredictable write caching, which as you can see the guy's patch tries to work around, and IDE's habit of auto-downgrading to slower speeds, dropping past the minimum speed where the hardware does CRC checking, and then writing corrupt data over good data), it shouldn't come as a surprise that these claims are often lies.
This guy performs a test that's brilliant in its simplicity: he boots up a Linux box, makes its filesystem busy with write activity, and then pulls the cord. Some of the so-called journaling filesystems he tried soon became so corrupted they wouldn't boot up---the autocheck demanded user intervention.
His experience agrees with mine using VxFS on HP-UX 10.20 and LFS on NetBSD. My experience with both is that they are bug-ridden, and the O(n) fsck is a fiction, because each has an O(n^2) supercarefulfsck mode that you need to use every time after a crash, or else your system will crash again due to lingering filesystem corruption. I like the cord-yanking test a lot, and am sick of having these bogus vendor/zealot claims parroted at me, so now I parrot this mailing list post back at them whenever I can.
I don't know if you realize this, but mailers like Postfix command the kernel to commit certain things to disk synchronously, and to tell Postfix after they've been written. Proper Unix kernels are expected to do this without lying, even when NFS is involved. If Unix keeps to its architecture, the overall Postfix MTA will not lose any mail, no matter how busy it is, no matter when you pull the cord. But with this half-journaling stuff that makes integrity claims you later find are untrue, and this sloppy Windows95-grade IDE write caching, I think it's unlikely the promises Sendmail kept in the days of SunOS 4.1.3 are still kept by Linux MTA's today.
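If you're curious what ``commit synchronously and tell me afterward'' means in code, it boils down to fsync(2). A minimal sketch (the queue-file name is made up, and a drive with a lying write cache silently voids the guarantee no matter what the kernel promises):

```python
import os

def durable_append(path, record):
    # Append one record and do not return until the kernel claims the
    # data is on stable storage -- the contract an MTA like Postfix
    # relies on before acknowledging a message to the sender.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        os.write(fd, record)
        os.fsync(fd)   # block until the kernel says it's really on disk
    finally:
        os.close(fd)
```

Only after fsync() returns may the MTA tell its peer ``250 OK''; that's the chain of promises the broken write caching quietly snaps.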
I need to write another rant some day soon about how people overestimate the benefit of RAID. The short version is: the times I've seen people disastrously lose data recently are not plain old disk failures. They're bugs in the filesystem implementation itself that cause corruption, or they're issues with IDE (yes, always IDE, never anything else) where there is no way for the host to know whether block <n> was successfully written to the disk or not because the bus just becomes confused and muddled, or where one failing/confused drive interferes with its partner ``Master'' or ``Slave'' on the same bus so it's difficult to isolate which drive is really bad, or where it's not even a matter of data being written or not written and the host knowing which, but rather corrupt data is written, often without an error---or, for the pedantic, yes, with an error, but with a so-called ``error'' that sometimes occurs during normal operation. When faced with these three classes of problems, RAID will mess up, and these are unfortunately the most common and realistic problems people have with homebuilt PeeCee IDE crap. People don't realize how dependent RAID is on accurate error reporting, on never silently writing corrupt data, and on the same sorts of synchronous/atomic writes that journaled filesystems need. They think it's like TCP and will just magically filter out bad disks like dropped packets. Not only does RAID just not work that way (Something went wrong so you couldn't write to one of the two RAID1 disks. Then you lost power. How do you know which mirror is the good one?) but it doesn't protect you from broken so-called ``journaling'' filesystems that go into fsck convulsions when you pull the cord.
I've also heard at least two stories about entire RAID sets that were lost because the ``hardware RAID'' PCI controller card went nuts from ``a bad SODIMM'' or something and wrote garbage over the disk, or worked fine until a disk failed and then couldn't recover the set, or the guy doesn't know exactly what happened but after being down for two days struggling with it had to throw out the card and go to backup tapes. Yet everyone still tells me how hardware RAID is the only serious RAID, and ``software RAID'' is junk, as if all RAID weren't done in software. It's amazing how successful marketing is at controlling people's minds even when opposed by glaringly contradictory facts about painful disasters.
so, I favour a plain old 'tar' or 'rsync' backup onto a second separate disk, or an AFS mirror, or something like that. It would be good to think about what happens if you accidentally delete a bunch of stuff, too. Even just being able to roll back 24 hours could be worth a lot, if you have old college photos and emails and source code that you'd like to keep into your retirement, and RAID won't give you that. Running rsync at 4am from one local disk to another kindasorta will, though of course I'd rather have Netapp snapshots.
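The one-local-disk-to-another idea can even give you that 24-hour rollback cheaply, using hardlinks the way rsync's --link-dest does. A toy sketch in Python, assuming small whole files and ignoring metadata-only changes (a chmod without a content change will still get hardlinked to the old copy):

```python
import os, shutil, filecmp

def snapshot(src, dest, prev=None):
    # Copy src into dest; files unchanged since the prev snapshot are
    # hardlinked instead of copied, so keeping a day (or a year) of
    # history costs almost no extra disk -- the rsync --link-dest trick.
    for dirpath, dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        outdir = os.path.join(dest, rel)
        os.makedirs(outdir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(outdir, name)
            old = os.path.join(prev, rel, name) if prev else None
            if old and os.path.isfile(old) and filecmp.cmp(old, s, shallow=False):
                os.link(old, d)      # unchanged: share the inode
            else:
                shutil.copy2(s, d)   # new or changed: real copy
```

In practice you'd just run rsync --link-dest from cron at 4am and rotate dated directories, but this is the whole idea in twenty lines.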
Netapp is allowed to do RAID. Netapp's filesystem fsck's in O(n). I believe their marketing. It's just yours and that of all the products you like that I distrust.
POSTED: ye who spits on this square of the sidewalk consigns himself and all his descendants to work as a slave to Scrooge McDuck for all eternity!
Citizens must be free to negotiate contracts without government interference, huh? There's this simple-minded faith-based belief in an ordered, comprehensible universe behind this Libertarian scum. I don't buy it. I like this a bit more:
free markets are always constructed by political regimes. I think this is true in the nineteenth-century heyday of the ideology of the free market, and that this is equally true in our contemporary neoliberal phase,
-- Michael Hardt
It's not ``government invasion into the ability of `private parties' to `consentually' form arbitrary contracts between one another.'' The government is the one enforcing these contracts. Contract law is not an empty field. There are already restrictions on what kinds of contracts are enforceable. I think the set of restrictions should change. Drastically.
In what way, you might ask. I'd like a contract regime that promotes something like a vibrant free market, which, as Hardt said, is ``constructed'' by government, to whatever extent it exists, today just as it was in the nineteenth century. In the regime we have now, there is no contract shopping---unless the transaction involves thousands of dollars (and sometimes even when it does), people just blindly agree to everything. The shoppers can't understand the contracts. They can't get easy access to the contracts to compare them, because they're always brought out at the last moment, right before money changes hands, or whenever possible even after money changes hands (oh, sure, we'll give you a refund). Once they can set up the first two factors and trap the majority of customers into blindly agreeing, sellers all begin to offer the same set of exploitive terms---the price premium you can charge for having a customer-favorable contract is nearly zero, not because it's worth zero but because it's unshoppable. It's not a vibrant market with plenty of contracts to choose from, as-is. Even contract expert legal geeks just agree to whatever PayPal wants, because the alternative is overwhelmingly expensive. The current consumer contract landscape is one of rigid cartels in which anyone with market share becomes a de facto legislator.
Most of these agreements should be simply wiped out, and the seller should have to contend with whatever legal exposure he has from ``I sell something, and you buy it.'' Is this exposure too great? Alter the law, for everyone, to reduce the seller's exposure, in a way we all find reasonable. I understand that some businesses selling inexpensive things have too much legal exposure without all their click-through fine-print. The Europeans who come over here visiting actually seem more afraid they're going to get sued than they are afraid of getting mugged. I'm not exaggerating! In that sense, they share the business's perspective.
But the right way to relieve this exposure and keep open these markets for cheap services is to adjust the law for everyone, through democratic legislative process, not allow companies to form private legislative branches and gather unto their bosoms hordes of opt-in citizens without, let's be honest, meaningful free market process, much less democratic process.
If I want to feel like I'm living in a democracy, that means I want a stake in making the rules that govern my daily life. Having to be careful what I send to any @hotmail.com address doesn't give me that feeling.
The CVSup FAQ says it more artfully than I do, but just in case we read it differently let me spell out my interpretation in a literal way. I think the CVSup guys are right in pointing out that this objection comes mostly from just not wanting to deal with any language but C. A performant language with rich libraries is hard to achieve and maintain, whether it be C or any other language. That, however, is not a good reason to preempt all work on any language except C. Such a position is a dangerous pruning of human inquiry.
For the performance objections, you need to see the Java, Language of Tomorrow rant. ``Interpreted'' is an arguably useful label, but it has no fundamental basis---there are only fast language environments and slow ones.
It's also outdated, mostly, which is why I have to pull it from archive.org. Third time's the charm, apparently. Linux's third-generation SCSI generic framework is finally ioctl()-based, just like the one on BSD and Solaris has been since the beginning.
Anyway, his old opinion is backed up by more experience than most. On the other hand, doing all that rework ten times over for Unixes that he doesn't even use himself must get annoying. He has this pet <bus>,<id>,<lun> device addressing scheme that he seems to think is some kind of ueberspecifier, transcending that pathetic ability to refer to devices with filenames which Unix already includes. Thanks to his invention, we can use easy-to-remember triple-sets of numbers instead of linguistic mnemonics! What's next, virtual dip switches? He tries to emulate this pet scheme on all platforms. Personally, I think it's colossally stupid. First, how does it make my life easier that I must remember to use 'cdrecord dev=0,6,0' and 'eject cd0'? I now have two specifiers to remember instead of one. This is almost as bad as Linux's ``generic driver'' practice of attaching the same SCSI device with multiple drivers. Second, his ueberscheme is not entertainingly clever enough that I should put up with it: how do I know which is the ``first'' SCSI bus, and which is the ``zeroth''? Bus-numbering is not well-defined, particularly when some of the SCSI busses are not SCSI, but rather are IDE, or dynamically-attached 'umass(4)' USB devices. If he dithers over this, toss him a fibre channel WWN, and I'm sure he'll feel obligated to include that 16-digit hex number in his ueberspecifier, because it feels so official to him.
In his critique, he equates the brokenness of Linux, which corrupts diagnostic messages from devices and thus makes it impossible to do any 'cdrecord' development work on Linux, with the brokenness of BSD, which interferes with his pet device-addressing scheme.
The ethics of BSD require that, when someone says, ``how can I do <foolish thing>,'' we must respond with ``you can't'' as often as is practically convenient. BSD forces Joerg to abandon his pet naming scheme and use the same damned device names as the rest of the system. This is a feature, not a bug.
So, yes, I'm criticizing the criticizer. Read Joerg's article, and listen to him when he criticizes Linux, but ignore his criticisms of BSD. K? No, seriously!