Shared posts

14 Dec 16:34

DateTime

ChaosAdventurer

Depends on your needed/desired level of precision and accuracy. Much of the time, I can get a number good enough for management, but the variabilities still readily lead to imaginary numbers and degradation of sanity.

It's not just time zones and leap seconds. SI seconds on Earth are slower because of relativity, so there are time standards for space stuff (TCB, TCG) that use faster SI seconds than UTC/Unix time. T2 - T1 = [God doesn't know and the Devil isn't telling.]
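To put a rough number on that rate difference, here is a back-of-the-envelope sketch (mine, not from the comic) using the IAU defining constant L_G, which relates Geocentric Coordinate Time (TCG) to Terrestrial Time (TT); the further chain down to UTC is glossed over:

    # Sketch only: how much faster TCG ticks than clocks on the geoid (TT).
    # L_G is the IAU defining constant for the TT/TCG rate difference; the
    # TCG-to-UTC relationship involves more steps than shown here.
    L_G = 6.969290134e-10            # fractional rate by which TCG gains on TT
    SECONDS_PER_YEAR = 31_556_952    # average Gregorian year

    drift_ms = L_G * SECONDS_PER_YEAR * 1e3
    print(f"TCG gains about {drift_ms:.0f} ms per year on TT")  # roughly 22 ms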
22 Mar 18:42

Here’s why Putin won’t use nukes in Ukraine — Pass it on.

by Robert X. Cringely

President Putin of Russia has been talking a lot lately about his forces using nuclear weapons — presumably tactical nuclear weapons — in the war with Ukraine. It’s an easy threat to make but a difficult one to follow through on for reasons I’ll explain here in some detail. I’m not saying Mr. Putin won’t order nuclear strikes. He might. Dictators do such things from time to time. But if Mr. Putin does push that button, I’d estimate there is perhaps a 20 percent chance that nukes will actually be launched and a 100 percent chance that Mr. Putin will end that day with a bullet in his brain.

Given that I don’t think Mr. Putin really wants a bullet in his brain, my goal here is to lay out facts and probabilities to show how nuking Ukraine would be a huge mistake for Putin and Russia.  With the facts thus presented and presumably repeated by many people in many venues, that information will quickly reach everyone in positions to make such a nuclear war NOT happen. But without essays like this one, that education and intervention is much less likely. So I am writing this as a public service. Pass it on.

What do I know? I worked as an investigator for the Presidential Commission on the Accident at Three Mile Island in 1979. Part of my portfolio then was to study the Federal Emergency Management Agency’s response to that nuclear accident, which was pathetic.

TMI was FEMA’s first big crisis as FEMA. Most of the agency had been called Civil Defense until a short time before TMI. Their idea of nuclear safety (remember the Nuclear Regulatory Commission, not FEMA, actually regulates the reactors) had been tracking clouds of predicted fallout from Russian nuclear attacks driven by prevailing winds and coming up with plans to move civilians out of the way of those clouds. In the northeast USA around Three Mile Island, the old Civil Defense plans called for moving 75 million people in 72 hours — an impossible task, then or now.

Think about that task for a moment. In Ukraine so far it has taken three weeks — not three days — to move THREE million people. And this is before any nukes have dropped.

The simple lesson here is that nobody is going to have time to move out of the way of tactical nukes.

If you are wondering what the damage of such a limited nuclear war would look like, the Chernobyl nuclear accident from 1986 provides a pretty good example, since that disaster site lies between Kiev and Russia and Belarus.  Above you’ll find a map showing the Cesium 137 fallout from Chernobyl. If you want to know what bombing Kiev would do, just move the Chernobyl spot down and a little to the right and see where the blotches fall.

But tactical nukes aren’t a nuclear accident, you say, the map would be different. Yes, it would be BIGGER. Chernobyl melted DOWN while bomb and missile and artillery fallout bounce UP into the atmosphere and spread much farther.

Most of the fallout of a Kiev attack, in fact, would land in Russia. The cities of Bryansk (427,000 population), Kaluga (338,000), Kursk (409,000), Orel (324,000), and Tula (468,000) would all be hit, not by weapon strikes, but by fallout. That’s just under two million people exposed in those five cities, not counting folks in the countryside between.

Two million is approximately the population of Kiev, or was before a lot of those people fled west.

We can estimate civilian deaths from radiation, from heat, from atmospheric over-pressure, but I’ll just jump here to the bottom line that about one million Russians would die from such a nuclear attack, both directly and through greatly increased cancer deaths in later years.

So a nuclear attack on Kiev would kill more Russians than Ukrainians.

Moving the attack to, say, Odessa would kill more Ukrainians, but it would completely destroy the Ukrainian agricultural economy for a century and still kill tens of thousands of Russians.

Now consider the supposed military justification for such an attack. Ukraine claims to have killed 15,000 Russian soldiers while Russia says its losses are more like 1,500. I don’t care which number is correct because Putin killing one million of his own people to avenge 1,500 or 15,000 deaths makes no sense.

He’s just whining.

The Russians are not evacuating those five cities, so Putin is either willing to lose a million Russian citizens or he has no real intention of launching nukes.

I’m guessing it’s a bluff.

And what if it isn’t a bluff? What if Putin actually goes ahead and pushes that button saying — as bullies are wont to do — “Look what you made me do.”

IF Putin pushes that button, it will set in motion a series of very quick events as half a dozen nations take action not against Russia, but against Putin personally. Navy SEALs and Chinese commandos will fall from the sky, but Putin will already be dead, killed by his own people, whether any nukes are actually launched or not.

Look what he made them do.







07 Feb 19:32

Backpack Decisions

"This one is perfect in every way, except that for some reason it's woven from a tungsten mesh, so it weighs 85 pounds and I'll need to carry it around on a hand cart." "That seems like a bad--" "BUT IT HAS THE PERFECT POCKET ARRANGEMENT!"
07 Nov 21:32

I'm With Her

ChaosAdventurer

I didn't realize Americans voted at the Hospital (as the sign indicates, the hospital is over there to the right)

We can do this.
18 Jun 13:01

Controlling JavaScript Malware Before it Runs, (Sat, Jun 18th)

ChaosAdventurer

Good trick to block some malware, and I am trying to set it to unread but The Old Reader doesn't want to do that on this droid.

We've posted a number of stories lately about various exploit kits and the malware they post. What ...

11 Feb 03:44

Renewables: The Next Fracking?

by John Michael Greer
I'd meant this week’s Archdruid Report post to return to Retrotopia, my quirky narrative exploration of ways in which going backward might actually be a step forward, and next week’s post to turn a critical eye on a common but dysfunctional habit of thinking that explains an astonishing number of the avoidable disasters of contemporary life, from anthropogenic climate change all the way to Hillary Clinton’s presidential campaign.

Still, those entertaining topics will have to wait, because something else requires a bit of immediate attention. In my new year’s predictions a little over a month ago, as my regular readers will recall, I suggested that photovoltaic solar energy would be the focus of the next big energy bubble. The first signs of that process have now begun to surface in a big way, and the sign I have in mind—the same marker that provided the first warning of previous energy bubbles—is a shift in the rhetoric surrounding renewable energy sources.

Broadly speaking, there are two groups of people who talk about renewable energy these days. The first group consists of those people who believe that of course sun and wind can replace fossil fuels and enable modern industrial society to keep on going into the far future. The second group consists of people who actually live with renewable energy on a daily basis. It’s been my repeated experience for years now that people belong to one of these groups or the other, but not to both.

As a general rule, in fact, the less direct experience a given person has living with solar and wind power, the more likely that person is to buy into the sort of green cornucopianism that insists that sun, wind, and other renewable resources can provide everyone on the planet with a middle class American lifestyle. Conversely, those people who have the most direct knowledge of the strengths and limitations of renewable energy—those, for example, who live in homes powered by sunlight and wind, without a fossil fuel-powered grid to cover up the intermittency problems—generally have no time for the claims of green cornucopianism, and are the first to point out that relying on renewable energy means giving up a great many extravagant habits that most people in today’s industrial societies consider normal.

Debates between members of these two groups have enlivened quite a few comment pages here on The Archdruid Report. Of late, though—more specifically, since the COP-21 summit last December came out with yet another round of toothless posturing masquerading as a climate agreement—the language used by the first of the two groups has taken on a new and unsettling tone.

Climate activist Naomi Oreskes helped launch that new tone with a diatribe in the mass media insisting that questioning whether renewable energy sources can power industrial society amounts to “a new form of climate denialism.” The same sort of rhetoric has begun to percolate all through the greenward end of things: an increasingly angry insistence that renewable energy sources are by definition the planet’s only hope, that of course the necessary buildout can be accomplished fast enough and on a large enough scale to matter, and that no one ought to be allowed to question these articles of faith.

There are plenty of points worth making about what this sort of rhetoric implies about the current state of the green movement, and I’ll get to some of those  shortly, but the issue that comes first to mind—typically enough for this blog—is a historical one: we’ve been here before.

When this blog first got going, back in 2006, the energy resource that was sure to save industrial civilization from the consequences of its own bad decisions was biofuels. Those of my readers who were paying attention to the peak oil scene in those days will remember the grandiose and constantly reiterated pronouncements about the oceans of ethanol from American corn and the torrents of biodiesel from algae that were going to sweep away the petroleum age and replace fossil fuels with all the cheap, abundant, carbon-neutral liquid fuel anyone could want. Those who raised annoying questions—and yes, I was one of them—got reactions that swung across a narrow spectrum from patronizing putdowns to furious denunciation.

As it turned out, of course, the critics were right and the people who insisted that biofuels were going to replace petroleum and other fossil fuels were dead wrong. There were at least two problems, and both of them could have been identified—and in fact were identified—well in advance, by that minority who were willing to take a close look at the underlying data.

The first problem was that the numbers simply didn’t work out. It so happens, for example, that if you grow corn using standard American agricultural methods, and convert that corn into ethanol using state of the art industrial fermenters and the like, the amount of energy you have to put into that whole process is more than you get by burning the resulting ethanol. Equally, it so happens that if you were to put every square inch of arable farmland in the world into biofuel crops, leaving none for such trivial uses as feeding the seven billion human beings on this planet, you still wouldn’t get enough biofuel to replace the world’s annual consumption of transportation fuels. Neither of these points was hard to figure out, and the second one was well known in the appropriate tech scene of the 1970s—you’ll find it, for example, in the pages of William Catton’s must-read book Overshoot—but somehow the proponents of ethanol and biodiesel missed it.

The second problem was a little more complex, but not enough so to make it impossible to figure out in advance. This was that the process of biofuel production and consumption had impacts of its own. Divert a significant fraction of the world’s food supply into the fuel tanks of people in a handful of rich countries—and of course this is what all that rhetoric about fueling the world amounted to in practice—and the resulting spikes in food prices had disastrous impacts across the Third World, triggering riots in quite a number of countries and outright revolutions in more than one.

Meanwhile rain forests in southeast Asia got clearcut so that palm oil plantations could supply the upper middle classes of Europe and America with supposedly sustainable biodiesel. It could have gotten much worse, except that the underlying economics were so bad that not that many years into the biofuels boom, companies started going broke at such a rate that banks stopped lending money for biofuel projects; some of the most highly ballyhooed algal biodiesel projects turned out to be, in effect, pond scum ponzi schemes; and except for those enterprises that managed to get themselves a cozy spot as taxpayer-supported subsidy dumpsters, the biofuel boom went away.

It was promptly replaced by another energy resource that was sure to save industrial civilization. Yes, that would be hydrofracturing of oil- and gas-bearing shales, or to give it its popular moniker, fracking. For quite a while there, you couldn’t click through to an energy-related website without being assailed with any number of grandiose diatribes glorifying fracking as a revolutionary new technology that, once it was applied to vast, newly discovered shale fields all over North America, was going to usher in a new era of US energy independence. Remember the phrase “Saudi America”? I certainly do.

Here again, there were two little problems with these claims, and the first was that once again the numbers didn’t work out. Fracking wasn’t a new technological breakthrough—it’s been used on oil fields since the 1940s—and the “newly discovered” oil fields in North Dakota and elsewhere were nothing of the kind; they were found decades ago and the amount of oil in them, which was well known to petroleum geologists, did not justify the wildly overinflated claims made for them. There were plenty of other difficulties with the so-called “fracking revolution,” including the same net energy issue that ultimately doomed the “biodiesel revolution,” but we can leave those for now, and go on to the second little problem with fracking. 

This was the awkward fact that the fracking industry, like the biodiesel industry, had impacts of its own that weren’t limited to the torrents of new energy it was supposed to provide. All across the more heavily fracked parts of the United States, homeowners discovered that their tap water was so full of methane that they could ignite it with a match, while some had to deal with the rather more troubling consequences of earthquake swarms and miles-long trains of fracked fuels rolling across America’s poorly maintained railroad network. Then there was the methane leakage into the atmosphere—I don’t know that anybody’s been able to quantify that, but I suspect it’s had more than a little to do with the abrupt spike in global temperatures and extreme weather events over the last decade.

Things might have gotten much worse, except here again the underlying economics of fracking were so bad that not that many years into the fracking boom, companies have started going broke at such a rate that banks are cutting back sharply on lending for fracking projects. As I write this, rumors are flying in the petroleum industry that Chesapeake Energy, the biggest of the early players in the US fracking scene, is on the brink of declaring bankruptcy, and quite a few very large banks that lent recklessly to prop up the fracking boom are loudly proclaiming that everything is just fine while their stock values plunge in panic selling and the rates other banks charge them for overnight loans spike upwards.

Unless some enterprising fracking promoter figures out how to elbow his way to the government feed trough, it’s pretty much a given that fracking will shortly turn back into what it was before the current boom: one of several humdrum technologies used to scrape a little extra oil out from mostly depleted oil fields. That, in turn, leaves the field clear for the next overblown “energy revolution” to be rolled out—and my working guess is that the focus of this upcoming round of energy hype will be renewable energy resources: specifically, attempts to power the electrical grid with sun and wind.

In a way, that’s convenient, because we don’t have to wonder whether the two little problems with biofuels and fracking also apply to this application of solar and wind power. That’s already been settled; the research was done quite a while ago, and the answer is yes.

To begin with, the numbers are just as problematic for solar and wind power as they were for biofuels and fracking. Examples abound: real world experience with large-scale solar electrical generation systems, for example, shows dismal net energy returns; the calculations of how much energy can be extracted from wind that have been used to prop up windpower are up to two orders of magnitude too high; more generally, those researchers who have taken the time to crunch the numbers—I’m thinking here especially, though not only, of Tom Murphy’s excellent site Do The Math—have shown over and over again that for reasons rooted in the hardest of hard physics, renewable energy as a source of grid power can’t live up to the sweeping promises made on its behalf.

Equally, renewables are by no means as environmentally benign as their more enthusiastic promoters claim. It’s true that they don’t dump as much carbon dioxide into the atmosphere as burning fossil fuels do—and my more perceptive readers may already have noted, by the way, the extent to which talk about the very broad range of environmental blowbacks from modern industrial technologies has been supplanted by a much narrower focus on greenhouse gas-induced anthropogenic global warming, as though this is the only issue that matters—but the technologies needed to turn sun and wind into grid electricity involve very large volumes of rare metals, solvents, plastics, and other industrial products that have substantial carbon footprints of their own.

And of course there are other problems of the same kind, some of which are already painfully clear. A number of those rare metals are sourced from open-pit mines in the Third World worked by slave labor; the manufacture of most solvents and plastics involves the generation of a great deal of toxic waste, most of which inevitably finds its way into the biosphere; wind turbines are already racking up an impressive death toll among birds and bats—well, I could go on. Nearly all of modern industrial society’s complex technologies are ecocidal to one fairly significant degree or another, and the fact that a few of them extract energy from sunlight or wind doesn’t keep them from having a galaxy of nasty indirect environmental costs.

Thus the approaching boom in renewable energy will inevitably bring with it a rising tide of ghastly news stories, as corners get cut and protections overwhelmed by whatever degree of massive buildout gets funded before the dismal economics of renewable energy finally take their inevitable toll. To judge by what’s happened in the past, I expect to see plenty of people who claim to be concerned about the environment angrily dismissing any suggestion that the renewable energy industry has anything to do with, say, soaring cancer rates around solar panel manufacturing plants, or whatever other form the inevitable ecological blowback takes. The all-or-nothing logic of George Orwell’s invented language Newspeak is astonishingly common these days: that which is good (because it doesn’t burn fossil fuels) can’t possibly be ungood (because it isn’t economically viable and also has environmental problems of its own), and to doubt the universal goodness of what’s doubleplusgood—why, that’s thoughtcrime...

Things might get very ugly indeed, all things considered, except that the underlying economics of renewable energy as a source of grid electricity aren’t noticeably better than those of fracking or corn ethanol. Six to ten years down the road, as a result, the bankruptcies and defaults will begin, banks will start backing away from the formerly booming renewables industry, and the whole thing will come crashing down, the way ethanol did and fracking is doing right now. That will clear the way, in turn, for whatever the next energy boom will be—my guess is that it’ll be nuclear power, though that’s such a spectacular money-loser that any future attempt to slap shock paddles on the comatose body of the nuclear power industry may not get far.

It probably needs to be said at this point that one blog post by an archdruid isn’t going to do anything to derail the trajectory just sketched out. Ten thousand blog posts by Gaia herself, cosigned by the Pope, the Dalai Lama, and Captain Planet and the Planeteers probably wouldn’t do the trick either. I confidently expect this post to be denounced furiously straight across the green blogosphere over the next couple of weeks, and at intervals thereafter; a few years from now, when dozens of hot new renewable-energy startups are sucking up million-dollar investments from venture capitalists and planning their initial IPOs, such few references as this and similar posts get will be dripping with patronizing contempt; then, when reality sets in, the defaults begin and the banks start backing away, nobody will want to talk about this essay at all.

It probably also needs to be pointed out that I’m actually very much in favor of renewable energy technologies, and have discussed their importance repeatedly on this blog. The question I’ve been trying to raise, here and elsewhere, isn’t whether or not sun and wind are useful power sources; the question is whether it’s possible to power industrial civilization with them, and the answer is no.

That doesn’t mean, in turn, that we’ll just keep powering industrial civilization with fossil fuels, or nuclear power, or what have you. Fossil fuels are running short—as oilmen like to say, depletion never sleeps—and nuclear power is a hopelessly uneconomical white-elephant technology that has never been viable anywhere in the world without massive ongoing government subsidies. Other options? They’ve all been tried, and they don’t work either.

The point that nearly everyone in the debate is trying to evade is that the collection of extravagant energy-wasting habits that pass for a normal middle class lifestyle these days is, in James Howard Kunstler’s useful phrase, an arrangement without a future. Those habits only became possible in the first place because our species broke into the planet’s supply of stored carbon and burnt through half a billion years of fossil sunlight in a wild three-century-long joyride. Now the needle on the gas gauge is moving inexorably toward that threatening letter E, and the joyride is over. It really is as simple as that.

Thus the conversation that needs to happen now isn’t about how to keep power flowing to the grid; it’s about how to reduce our energy consumption so that we can get by without grid power, using local microgrids and home-generated power to meet sharply reduced needs. We don’t need more energy; we need much, much less, and that implies in turn that we—meaning here especially the five per cent of our species who live within the borders of the United States, who use so disproportionately large a fraction of the planet’s energy and resources, and who produce a comparably huge fraction of the carbon dioxide that’s driving global warming—need to retool our lives and our lifestyles to get by with the sort of energy consumption that most other human beings consider normal.

Unfortunately that’s not a conversation that most people in America are willing to have these days. The point that’s being ignored here, though, is that if something’s unsustainable, sooner or later it will not be sustained. We can—each of us, individually—let go of the absurd extravagances of the industrial age deliberately, while there’s still time to do it with some measure of grace, or we can wait until they’re pried from our cold and stiffening fingers, but one way or another, we’re going to let go of them. The question is simply how many excuses for delay will be trotted out, and how many of the remaining opportunities for constructive change will go whistling down the wind, before that happens.
14 Sep 20:56

“Kyle Dine & Friends” Video Premiere – Sept. 20 & 21

by kyledine

Online-Premiere

Mark your calendars! This upcoming Sunday and Monday you can watch the video debut for free online and chat with Kyle Dine too!

It will be streaming from www.foodallergyvideo.com/premiere.html.


28 Jan 17:03

Your Phone Interface is a Legacy Train Wreck

If you were to design a smartphone interface from scratch, without any legacy issues, would it look like a bunch of app icons sitting on a home screen?

No. Because that would be stupid. Would you want your users to be hunting around for the right app every time they want to do simple things? That ruins flow. And it unnecessarily taxes your brain by making you shift your mental model each time you switch apps. You’re always thinking, “Is this the one with the swiping left or the one that scrolls down?”

[image]

There is a lot of background processing in your brain just to move from app to app. I sometimes skip simple tasks on my phone because I can’t go app-diving one more time or my head will explode. My brain seems to have a finite capacity within a given day for “hunting for the right app.”

When you have my kind of job, losing flow is devastating. I’m a fan of new technology, but objectively speaking, the smartphone is the biggest threat to creativity since communism. My phone interrupts me all day long. And if I have a new idea that I want to jot down before the next interruption, it is nearly impossible because of the app-hunting legacy model of phones. I usually forget what I was thinking because I get interrupted or my mind moves on before I even decide what app to use for my note.This is all worsened by the fact that modern life is making my attention span shrink to nothing.

So what would a proper smartphone interface look like?

It would be a blank screen. Like this, except with a keyboard at the bottom. 

[image]

Let’s say, for example, you start typing (or speaking) on the blank screen…

"Kenn…"

Your smartphone starts guessing that you are either writing an email or a text message because “Kenn” is almost certainly short for Kenny and you have been communicating with someone by that name. A hovering menu appears at the top right while you continue, offering you the chance to choose your app (email or text) whenever you please. You can do it now or wait until you stop typing, to preserve flow.

Halfway through your typing, the OS understands that this is probably an email message because your recent messages to Kenny were all email. The OS starts wrapping an email interface around your message as you type. It also automatically attaches your history of email back and forth to your new message, at the bottom.

The idea here is that you start working first, to maintain flow, and only later do you select the app. And by the time you need to select the app, the OS has done a 95% accurate job of doing it for you, so you simply proceed without ever actively selecting the app.
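To make that guessing step concrete, here is a toy sketch of a prefix-based intent guesser (my own illustration, not anything from a real phone OS; the contacts, channel history, and prefix rules are all invented for the example):

    # Toy "type first, pick the app later" guesser. All data here is made up.
    RECENT_CHANNEL = {"Kenny": "email", "Steve": "text"}  # last channel used per contact
    PREFIX_RULES = [
        ("wea", "weather"),
        ("wel", "banking"),
        ("eat", "restaurant search"),
        ("pick up", "reminders"),
        ("thurs", "calendar"),
    ]

    def guess_apps(typed):
        """Return candidate apps for the text typed so far, best guess first."""
        t = typed.strip().lower()
        guesses = []

        # 1. Does it look like the start of a contact's name? Reuse the last channel.
        for name, channel in RECENT_CHANNEL.items():
            if t and name.lower().startswith(t):
                guesses.append(f"{channel} to {name}")

        # 2. Otherwise fall back to keyword prefixes.
        for prefix, app in PREFIX_RULES:
            if t and (t.startswith(prefix) or prefix.startswith(t)):
                guesses.append(app)

        return guesses or ["note"]  # default: just keep the text as a note

    print(guess_apps("Kenn"))          # ['email to Kenny']
    print(guess_apps("wea"))           # ['weather']
    print(guess_apps("Pick up milk"))  # ['reminders']

A real implementation would score candidates with much richer signals (recent message history, time of day, location) rather than a hard-coded table, which is exactly the 95%-accurate guess described above.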

Does this work for all sorts of apps? I haven’t thought through every possibility, but I think so. Let’s see some more examples and I’ll tell you how the OS would guess the right app and auto-surround your work with the most relevant options.

If you type…        Then…

wea…                Local weather info pops up
wel…                If Wells Fargo is your bank, the sign-in page appears
eat                 A restaurant search app or search engine pops up
saf…                Safari browser pops up
Goo…                Google search box pops up
Stev…               Either text or email (hover menu choice)
Ala…                Open alarm clock
tw…                 Open twitter
My bagel is…        Hover menu for Facebook, Twitter, …
Pick up…            Reminder app opens for your to-do list
Thurs…              Your calendar pops up to show next Thursday

Stop saying I am reinventing the DOS operating system. DOS was dumb. The smartphone can see your work as part of a larger context. It will know what you need based on the situation. 

Smartphone users are experienced at typing because we do so much texting. We do it quickly and effortlessly. So my suggested blank-screen interface goes with our strengths instead of making you play a game of Where’s Waldo to find the right app before every task.

How did we get the app-centric terrible interfaces of today? I think it goes back to the dawn of personal computers. In those days it was no big deal to first pick the software (Word or Excel) and then spend a few hours within an app doing one task. There was no mental tax involved in switching apps because you only ever used one or two. So the app-first model became normal.

Fast-forward to the original Apple smartphone. The business model required an open market for software providers, and they each got their own little branding, navigation strategies, and real estate on your screen. It works great until you have fifty apps. The app-first interface is a total failure at this point. It works, but the cost is so high I am having legitimate thoughts about abandoning my smartphone for good. (I won’t pull the trigger, but why am I even considering it?)

If you don’t like the blank screen with a keyboard interface, here’s another idea that is better than current phones: Use faces for the interface.

By that I mean my home screen icons should be the faces of people I deal with most often. If the icon with Bob’s face shows a little “2” on it, I know I can click to see two messages from Bob, or perhaps I have one message and one meeting today with Bob, or one task to do for Bob. 

The main insight here is that humans reflexively arrange their tasks by the human who benefits from them. Sometimes the human is yourself, so your face is on the front page too. I doubt you can think of a task that does not relate to a specific face in your life.

And finally, a word to current makers of smartphone operating systems. If my OS interrupts me to ask about updating software, you failed. Please keep working on that until you get it right. Make your machine conform to my flow, not the other way around.

12 Dec 16:21

Documents

ChaosAdventurer

the alt tag is an impossible file name as well, too long for Windows or Linux to manage. I do hit this sort of thing at clients' sites all the time.

Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Copy of Untitled.doc
12 Sep 20:30

Han and Cheese – DORK TOWER 12.09.14

by John Kovalic

Super Happy Solo Fun Hour

HEY, since I have your attention, check out MY INSANE 60-MILE BIKE RIDE, and take a gander at all the cool swag you can get simply for helping a wonderful charity! Including new limericks by Neil Gaiman and Pat Rothfuss that the Doubleclicks will also set to music, and lots more from more bestselling authors than I can shake a deadline at!

27 Aug 15:37

A bird on the wall …

by landon
ChaosAdventurer

An app to do this would be nice, but just wouldn't have the same impact.

It was mid-afternoon and someone was poking me. “Hey, are you awake?” they whispered. I blinked and raised my chin.

“Huh?”

“Ralph wants to know what database we’ve chosen. What database did we choose?”

“Grughnk. Guhbbah.” I’d nearly fallen asleep in the marathon marketing meeting again. “Um, we decided on, um, Squorical.”

“What?”

“Orasqueakle Server. Yeah, that was it.”

“Oh God.”

Many eyes were on me. Many of the eyes had suits and ties attached to them, and they wore expectant looks. I put down the notebook I’d been doodling satanic cartoons in and lurched from my chair. “Sorry, I’ve got to –” I said, and waved generally toward the door. (“Oraskwackle? Really?”)

Ralph the Marketing Guy said, “Great idea. Ten minute bio-break everyone!” He was chipper and ready for another hour at least. I wanted to throw him off our balcony.

We’d been in that meeting room for nearly two hours, discussing nonsense and useless stuff (along the lines of Douglas Adams: “Okay, if you’re so smart, YOU tell us what color the database tables should be!”) and I needed to be designing and coding rather than listening to things that made my shit itch. It was the third multi-hour marathon meeting with Marketing that week, and I was nearly at the end of my rope. One of the topics under discussion was why engineering was behind schedule.

—-

Our little group of programmers often walked to lunch. I’m not sure where we went the next day; it might have been to the borderline not-very-good Chinese place down the hill that had wonderful mis-spellings on its menu (my favorite dish was Prok Noddles), or it might have been the Insanely Hot Chinese place that was in an old IHOP A-frame building (they had a great spicy soup that would spray a perfect Gaussian distribution of hot pepper oil on the paper tablecloth as you ate from the bowl). All I remember for sure is that on the return walk we passed the shops on the street as we always did, but for some reason I paused just outside the clock shop, which I’d never really paid attention to before.

A spark of evil flared in my head. “You guys go on, I’m going in here.” I got a couple of strange looks, but nobody followed me in. I spent about a hundred dollars.

—-

The next morning the marathon meeting started on time, but there was a new attendee. I had nailed it to the wall. It said “Cuckoo!” ten times.

“What is that?” one of the marketeers asked. [This question pretty much defines most marketing for me, by the way].

“Just a reminder,” I said with a slight smile and a tone that implied it was no big deal. (I’d talked with our CEO about it earlier and he had said okay, but I think I was going to do it anyway).

The rest of the usual attendees filtered in and the meeting started. The clock said “Bong!” at fifteen minutes past the hour, which got a couple of raised eyebrows, and it bonged again on the half hour and a quarter-to. At 10:59 it made a “brrrr-zup!” sound, as if it was getting ready for some physical effort, then at 11:00 the bird popped out, interrupting a PowerPoint presentation that I’ll never be able to recall, and went “CUCKOO! CUCKOO!” about a million times. Well, only eleven, but it certainly felt like a long, long time. The guy who had been talking had trouble remembering what he’d been saying. Everyone else had been expecting the display, but the speaker, one of our serial meeting-stretchers, had been lost in his own blather.

The meeting broke early that day.

The next day, too.

The hours we spent in that room no longer passed anonymously. The clock’s smaller noises, as it prepared to say “Bong”, and its more dramatic preparations for each hour’s Big Show were now part of the agenda. We knew time was passing. The meetings did become shorter. We had an interesting time explaining the clock to customers (for this was a small company with only a couple of meeting rooms), but in general people from outside understood, and often laughed in approval.

The clock was a relatively cheap model that had to be wound every day. I usually got in to work pretty early and wound it, but occasionally I forgot to, or worked remotely and didn’t get to the office at all. Somehow the clock never ran down. It turned out that our office manager was winding it, and occasionally our CEO would wind it, too.

I knew I’d gotten the message across when one of the marketing guys looked up at the bird (which had just announced 3PM) and said, “I hate that thing.” And I smiled, and the meeting ended soon after, and I got up and left the room and went to my keyboard and wrote some more code.

[I left that start-up a few months later. I still have the clock.]

13 Mar 20:17

There are times I do things of dubious value

by Christopher Wright
ChaosAdventurer

The obsolescence of eReaders. Welcome their eventual replacement, the pReader

... and when I do, I share them with you:

Link for those of you who don't want to deal with iframes: https://www.youtube.com/watch?v=DP_1T64XQXA

07 Sep 01:00

Learning NVP, Part 5: Creating a Logical Network

by slowe

I’m back with more NVP goodness; this time, I’ll be walking you through the process of creating a logical network and attaching VMs to that logical network. This work builds on the stuff that has come before it in this series:

  • In part 1, I introduced you to the high-level architecture of NVP.
  • In part 2, I walked you through setting up a cluster of NVP controllers.
  • In part 3, I showed you how to install and configure NVP Manager.
  • In part 4, I discussed how to add hypervisors (KVM hosts, in this case) to your NVP environment.

Just a quick reminder in case you’ve forgotten: although VMware recently introduced VMware NSX at VMworld 2013, the architecture of NSX when used in a multi-hypervisor environment is very similar to what you can see today in NVP. (In pure vSphere environments, the NSX architecture is a bit different.) As a result, time spent with NVP now will pay off later when NSX becomes available. Might as well be a bit proactive, right?

At the end of part 4, I mentioned that I was going to revisit a few concepts before proceeding to the logical network piece, but after deeper consideration I’ve decided to proceed with creating a logical network. I still believe there will be a time when I need to stop and revisit some concepts, but it didn’t feel right just yet. Soon, I think.

Before I get into the details on how to create a logical network and attach VMs, I want to first talk about my assumptions regarding your environment.

Assumptions

This walk-through assumes that you have an NVP controller cluster up and running, an instance of NVP Manager connected to that cluster, at least 2 hypervisors installed and added to NVP, and at least 1 VM running on each hypervisor. I further assume that your environment is using KVM and libvirt.

Pursuant to these assumptions, my environment is running KVM on Ubuntu 12.04.2, with libvirt 1.0.2 installed from the Ubuntu Cloud Archive. I have the NVP controller cluster up and running, and an instance of NVP Manager connected to that cluster. I also have an NVP Gateway and an NVP Service Node, two additional components that I haven’t yet discussed. I’ll cover them in a near-future post.

Additionally, to make it easier for myself, I’ve created a libvirt network for the Open vSwitch (OVS) integration bridge, as outlined here (and an update here). This allows me to simply point virsh at the libvirt network, and the guest domain will attach itself to the integration bridge.
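For what it's worth, here is a minimal sketch of that kind of libvirt network definition driven from Python rather than plain virsh commands (the network name ovs-int and bridge name br-int are assumptions; the posts linked above describe the authoritative setup):

    # Sketch: define a libvirt network that drops guests onto the OVS
    # integration bridge, so a domain's <interface type='network'> can
    # simply reference it. Names below are assumptions for illustration.
    import libvirt

    OVS_NET_XML = """
    <network>
      <name>ovs-int</name>
      <forward mode='bridge'/>
      <bridge name='br-int'/>
      <virtualport type='openvswitch'/>
    </network>
    """

    conn = libvirt.open("qemu:///system")
    net = conn.networkDefineXML(OVS_NET_XML)
    net.setAutostart(True)
    net.create()  # start the network immediately
    print("Defined networks:", [n.name() for n in conn.listAllNetworks()])

With a network like that in place, pointing a guest's interface at it is what lets virsh attach the domain to the integration bridge, as described in the linked posts.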

Revisiting Transport Zones

I showed you how to create a transport zone in part 4; it was necessary to have a transport zone present in order to add a hypervisor to NVP. But what is a transport zone? I didn’t explain it there, so let me do that now.

NVP uses the idea of transport zones to provide connectivity models based on the topology of the underlying network. For example, you might have hypervisors that connect to one network for management traffic, but use an entirely different network for VM traffic. The combination of a transport zone plus the transport connectors tells NVP how to form tunnels between hypervisors for the purposes of providing logical connectivity.

For example, consider this graphic:

The transport zones (TZ–01 and TZ–02) help NVP understand which interfaces on the hypervisors can communicate with which other interfaces on other hypervisors for the purposes of establishing overlay tunnels. These separate transport zones could be different trust zones, or just reflect the realities of connectivity via the underlying physical network.

Now that I’ve explained transport zones in a bit more detail, hopefully their role in adding hypervisors makes a bit more sense now. You’ll also need a transport zone already created in order to create a logical switch, which is what I’ll show you next.

Creating the Logical Switch

Before I get started taking you through this process, I’d like to point out that this process is going to seem laborious. When you’re operating outside of a CMP such as CloudStack or OpenStack, using NVP will require you to do things manually that you might not have expected. So, keep in mind that NVP was designed to be integrated into a CMP, and what you’re seeing here is what it looks like without a CMP. Cool?

The first step is creating the logical switch. To do that, you’ll log into NVP Manager, which will dump you (by default) into the Dashboard. From there, in the Summary of Logical Components section, you’ll click the Add button to add a switch. To create a logical switch, there are four sections in the NVP Manager UI where you’ll need to supply various pieces of information:

  1. First, you’ll need to provide a display name for the new logical switch. Optionally, you can also specify any tags you’d like to assign to the new logical switch.
  2. Next, you’ll need to decide whether to use port isolation (sort of like PVLANs; I’ll come back to these later) and how you want to handle packet replication (for BUM traffic). For now, leave port isolation unchecked and (since I haven’t shown you how to set up a service node) leave packet replication set to Source Nodes.
  3. Third, you’ll need to select the transport zone to which this logical switch should be bound. As I described earlier, transport zones (along with connectors) help define connectivity between various NVP components.
  4. Finally, you’ll select the logical router, if any, to which this switch should be connected. We won’t be using a logical router here, so just leave that blank.

Once the logical switch is created, the next step is to add logical switch ports.

Adding Logical Switch Ports

Naturally, in order to connect to a logical switch, you need logical switch ports. You’ll add a logical switch port for each VM that needs to be connected to the logical switch.

To add a logical switch port, you’ll just click the Add button on the line for Switch Ports in the Summary of Logical Components section of the NVP Manager Dashboard. To create a logical switch port, you’ll need to provide the following information:

  1. You’ll need to select the logical switch to which this port will be added. The drop-down list will show all the logical switches; once one is selected that switch’s UUID will automatically populate.
  2. The switch port needs a display name, and (optionally) one or more tags.
  3. In the Properties section, you can select a port number (leave blank for the next port), whether the port is administratively enabled, and whether or not there is a logical queue you’d like to assign (queues are used for QoS; leave it blank for no queue/no QoS).
  4. If you want to mirror traffic from one port to another, the Mirror Ports section is where you’ll configure that. Otherwise, just leave it all blank.
  5. The Attachment section is where you “plug” something into this logical switch port. I’ll come back to this—for now, just leave it blank.
  6. Under Port Security you can specify what address pairs are allowed to communicate with this port.
  7. Finally, under Security Profiles, you can attach an existing security profile to this logical port. Security profiles allow you to create ingress/egress access-control lists (ACLs) that are applied to logical switch ports.

In many cases, all you’ll need is the logical switch name, the display name for this logical switch port, and the attachment information. Speaking of attachment information, let’s take a closer look at attachments.

Editing Logical Switch Port Attachment

As I mentioned earlier, the attachment configuration is what “plugs” something into the logical switch. NVP logical switch ports support 6 different types of attachment:

  • None is exactly that—nothing. No attachment means an empty logical port.
  • VIF is used for connecting VMs to the logical switch.
  • Extended Network Bridge is a deprecated option for an older method of bridging logical and physical space. This has been replaced by L2 Gateway (below) and should not be used. (It will likely be removed in future NVP releases.)
  • Multi-Domain Interconnect (MDI) is used in specific configurations where you are federating multiple NVP domains.
  • L2 Gateway is used for connecting an L2 gateway service to the logical switch (this allows you to bring physical network space into logical network space). This is one I’ll discuss later when I talk about L2 gateways.
  • Patch is used to connect a logical switch to a logical router. I’ll discuss this in greater detail when I get around to talking about logical routing.

For now, I’m just going to focus on attaching VMs to the logical switch port, so you’ll only need to worry about the VIF attachment type. However, before we can attach a VM to the logical switch, you’ll first need a VM powered on and attached to the integration bridge. (Hint: If you’re using KVM, use virsh start <VM name> to start the VM. Or just read this.)

Once you have a VM powered on, you’ll need to be sure you know the specific OVS port on that hypervisor to which the VM is attached. To do that, you would use ovs-vsctl show to get a list of the VM ports (typically designated as “vnetX”), and then use ovs-vsctl list port vnetX to get specific details about that port. Here’s the output you might get from that command:

In particular, note the external_ids row, where it stores the MAC address of the attached VM. You can use this to ensure you know which VM is mapped to which OVS port.
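If you have more than a couple of VMs, it's handy to build the whole port-to-MAC map in one pass instead of eyeballing ports one at a time. The sketch below is my own illustration, not from the original post: it shells out to ovs-vsctl and reads the attached-mac key that libvirt-managed setups typically store in the Interface table's external_ids (table and field names can vary by OVS version):

    # Sketch: build a {ovs_port_name: vm_mac} map so you know which vnetX
    # belongs to which VM before wiring up the VIF attachment in NVP Manager.
    # Assumes a libvirt/KVM host where OVS records the guest MAC under the
    # attached-mac key of external_ids; adjust names if your setup differs.
    import re
    import subprocess

    def ovs_port_to_mac():
        out = subprocess.check_output(
            ["ovs-vsctl", "--columns=name,external_ids", "list", "Interface"],
            text=True,
        )
        mapping = {}
        # Records are separated by blank lines; each line is "column : value".
        for record in out.split("\n\n"):
            fields = {}
            for line in record.splitlines():
                if ":" in line:
                    key, value = line.split(":", 1)
                    fields[key.strip()] = value.strip()
            name = fields.get("name", "").strip('"')
            match = re.search(r'attached-mac="?([0-9a-fA-F:]+)"?',
                              fields.get("external_ids", ""))
            if name and match:
                mapping[name] = match.group(1)
        return mapping

    if __name__ == "__main__":
        for port, mac in sorted(ovs_port_to_mac().items()):
            print(f"{port:10s} -> {mac}")

Cross-checking those MACs against virsh domiflist <VM name> ties each OVS port back to a specific guest.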

Once you have the mapping information, you can go back to NVP Manager, select Network Components > Logical Switch Ports from the menu, and then highlight the empty logical switch port you’d like to edit. There is a gear icon at the far right of the row; click that and select Edit. Then click “4. Attachment” to edit the attachment type for that particular logical switch port. From there, it’s pretty straightforward:

  1. Select “VIF” from the Attachment Type drop-down.
  2. Select your specific hypervisor (must already be attached to NVP per part 4) from the Hypervisor drop-down.
  3. Select the OVS port (which you verified just a moment ago) using the VIF drop-down.

Click Save, and that’s it—your VM is now attached to an NVP logical network! A single VM attached to a logical network all by itself is no fun, so repeat this process (start up VM if not already running, verify OVS port, create logical switch port [if needed], edit attachment) to attach a few other VMs to the same logical network. Just for fun, be sure that at least one of the other VMs is on a different hypervisor—this will ensure that you have an overlay tunnel created between the hypervisors. That’s something I’ll be discussing in a near-future post (possibly part 6, maybe part 7).

Once your VMs are attached to the logical network, assign IP addresses to them (there’s no DHCP in your logical network, unless you installed a DHCP server on one of your VMs) and test connectivity. If everything went as expected, you should be able to ping VMs, SSH from one to another, etc., all within the confines of the new NVP logical network you just created.

There’s so much more to show you yet, but I’ll wrap this up here—this post is already way too long. Feel free to post any questions, corrections, or clarifications in the comments below. Courteous comments (with vendor disclosure, where applicable) are always welcome!

This article was originally posted on blog.scottlowe.org. Visit the site for more information on virtualization, servers, storage, and other enterprise technologies.
