Shared posts

03 Feb 02:50

The Two Rubensteins

by Michael Baumann
Tommy Gilligan-USA TODAY Sports

Under John Angelos, son of Peter Angelos (whose clubs reached the verge of the World Series in the 1990s), the Baltimore Orioles have been hamstrung by a lack of investment, uncertainty over potential relocation, and a lawsuit over control of the family fortune that contains allegations straight out of the Book of Genesis. A 100-win team with a once-in-a-generation core of up-the-middle talent has had its wings clipped by an owner whose picture should be on the Wikipedia page for Hanlon’s Razor.

Well, the O’s are finally getting out of purgatory. Angelos has agreed to sell the team to a group fronted by billionaire David Rubenstein. The new owners will reportedly purchase 40% of the team now, with Rubenstein replacing John Angelos as the Orioles’ control person once the sale goes through. His group will then have the option to buy full control at a later date.

Even before the sale is official, the Orioles have solved their most glaring weakness by acquiring Corbin Burnes for a draft pick and two players they didn’t really need. It’s as if the mere mention of Rubenstein changed the omens around the ballclub.

Rubenstein made his money as the co-founder of The Carlyle Group, a private equity firm. He now leads a syndicate that includes as minority investors a number of other investment bankers, as well as former New York City Mayor Michael Bloomberg, former Baltimore Mayor Kurt Schmoke, basketball Hall of Famer Grant Hill, and a local businessman named Cal Ripken Jr. The Angelos family will retain a minority share as well, at least until the elder Angelos dies.

How Rubenstein will run the team remains, at this juncture, an open question. The 74-year-old was born and educated in Baltimore, and is a lifelong Orioles fan. But he also spent 30 years making billions of dollars in an industry that’s ordinarily antithetical to the kind of civic obligation, humility, and long-term thinking required of a successful sports owner. As a rule, I’m skeptical of any sports owner with Rubenstein’s professional background. When the Marlins rid themselves of hated owner Jeffrey Loria in the last decade, his replacement — the private equity billionaire Bruce Sherman — actually made things worse.

The Orioles are getting out of purgatory. We’ll see which direction their new owner will take the team.

In Ken Burns’ Baseball, the prolific conservative columnist George Will offered a comment on the dawn of free agency that was ironic for both its content and its incisive brevity: “I happen to be a semi-Marxist in this field. I believe in the labor theory of value. The players are the labor. They create the economic value. They ought to get the lion’s share of the rewards.”

Unfortunately for Will, and Marx, and indeed the players, they must share those rewards with owners. Now, because baseball as a business generates some $10 billion a year, those rewards are rich enough for owners to pay their (admittedly enormous) labor costs, put some in the bank, and reinvest a big chunk of change into improving the team. A healthy team with a competent owner will find room to do all three. These teams are such bulletproof investments that even an incompetent owner will have no problem at all finding financing or, in the worst case, a buyer who will take the team off their hands for a handsome profit.

But an incompetent owner, or worse, an inattentive or uninterested one, will take in all the revenue from fans, all the labor generated by the players, and hoard it. Major league payroll will stagnate, facilities and infrastructure will be neglected, scouting and analytics departments will fall behind the times, and the team will lose 90 games a year.

Clever GM-ing and homegrown talent can only take a team so far when ownership is pulling money out of the organization. Never mind that clubs with tightfisted or meddlesome owners are less likely to attract clever executives or develop prospects in the first place. Being cheap does not magically make your employees smarter.

Talented major leaguers, savvy baseball ops people, top-end coaches and training equipment — a sufficiently engaged and deep-pocketed owner can just buy all of that. Wins do not follow automatically from good funding and sound process, but they’re much easier to achieve.

I’m sure you all remember how bored baseball people got during the 2021-22 lockout. (I wasn’t covering baseball exclusively back then, but at one point things got so slow I watched and ranked 13 different film adaptations of Cyrano de Bergerac over the course of a week. We were so bored.) The folks over at The Athletic came up with a more constructive way to keep the content engine ticking over: An MLB fantasy draft, of sorts.

Group projects like this are almost always interesting to play along with, and usually generate some new insight for the reader. But while most fantasy drafts are strictly about ranking players, The Athletic broadened the scope of the draft to include managers, markets, ballparks, even owners.

With the first pick, Dodgers beat writer Fabian Ardaya picked Dodgers owner Mark Walter, which is exactly what I would have done in his shoes. Five owners went off the board before Shohei Ohtani, and 11 owners went off the board in the first round. Seven of the remaining 19 picks were home cities.

That speaks to the importance of structural factors in team success. You can’t buy a championship, but you can absolutely buy a contender. As much as I love Baltimore, it’s not as glitzy as New York or Los Angeles. But most players want two things above all: They want to be paid well, and they want to win. When the older Angelos was funding a regular championship contender in the 1990s, the Orioles had no trouble attracting and retaining star players. Engaged ownership helped the Padres and Rangers speed-run their own rebuilds. The owner matters an enormous amount, because everything that makes a baseball team run smoothly is downstream of that person’s whims.

Rubenstein is a fascinating ownership candidate because his background points to two extremes. The engaged, committed local businessman is the absolute best type of sports owner. But the private equity billionaire is the worst, not just for a sports team, but for any company.

Let’s get the bad news out of the way first. Understanding what private equity investment does, and the way it impacts our daily lives, is going to be a lot easier once Megan Greenwell’s book on the subject comes out next year. Until then, I’ll do the best I can.

Firms like The Carlyle Group make their money by buying up companies they view as insufficiently profitable, and turning them into something they can sell at a higher share price. Frequently, that means targeting a company — like a newspaper, or a retail store, or a baseball team — that has brand value and owns useful real estate, and slashing costs wherever possible in order to achieve greater return on investment.

And because rich people don’t risk their own money if they don’t have to, these business acquisitions are usually highly leveraged. That means that the new owners demand greater profits on top of the money needed to pay back the loans they took out to buy the company in the first place.

That means laying off or underpaying workers, skimping on safety and build quality, and selling off real estate while maintaining or even raising the price of the product. David Roth’s recent Defector essay, “How Will the Golden Age of ‘Making It Worse’ End?” is not strictly a private equity story, but he offers a useful summary of the life cycle of a company under this type of ownership: “Management’s quest to see how much more cheaply an increasingly poor product can be sold at the same price and under the same name as what came before is, at bottom, the story of basically every industry or institution currently in decline or collapse.”

Normal businesses make things of value — either goods or services — and because their viability depends on the willingness of customers to buy what they’re selling, those goods and/or services have to create something worthwhile. A baseball team, or a newspaper, or a car company, is founded by people who are interested in the creation of the worthwhile thing, of making a widget to fill a need. Private equity sees only the value in the company, and desires to extract as much of the former from the latter as possible, even if it means disposing of the people and processes that made the company valuable in the first place. It is teleologically opposed to creation or innovation.

I have a hard enough time avoiding 100-word sentences even when I’m not quoting Will and Roth, so I’ll just spell out the worst-case scenario for anyone whose eyes have glazed over and who needs a concrete example from the world of baseball.

We know what it looks like when the owner of a baseball team is in it to cash revenue sharing checks and is indifferent to the on-field product: the Pirates, the Reds, the Guardians, the Marlins, the Athletics. Insofar as these owners care whether their teams win or lose, they do so under the assumption that their players and front office personnel can outperform better-spending teams by working harder or being more clever.

On rare occasions, it works. But these owners are under the mistaken impression that spending less money inherently makes your employees smarter.

Here’s the good news, for the Orioles, if not for society.

Sports teams are such great investment vehicles because they don’t operate in a free market. If we spent money on baseball as a pastime according to quality of product and value for money, there would not be a single Pittsburgh Pirates customer in existence. They all would’ve started buying baseball from another club decades ago.

Instead, ballclubs have fans. Both company and consumer have grafted local tribalism into their very identity, so the one is inextricable from the other and bound to personal self-image. I am my community, and my community is my team, so I am my team. That’s how bad owners bilk fans out of hundreds or even thousands of dollars in the name of supporting a club that provides nothing in terms of civic pride. Most fans of perpetual last-place teams would no sooner abandon their baseball allegiance than their religion or their family. That’s how deeply this adherence, however irrational, takes root.

Some people never outgrow those attachments, no matter how rich they become. And on rare occasions, those people end up with billions of dollars to spend when their childhood ballclub becomes available.

Mets owner Steve Cohen is such a person, and while his project has yet to bear fruit, it’s not for lack of effort. The Mets are better-funded and better-managed than they ever were under the Wilpons. Sometimes, a love of the sport itself is enough. The late Peter Seidler made his money in private equity, then bought into the Padres alongside his uncle, former Dodgers chairman Peter O’Malley. Seidler became one of the most popular owners in baseball after he funded San Diego’s most successful team in more than 20 years.

Rubenstein might turn out to understand, as Cohen and Seidler did, that his wealth is not an empirical measure of his own wherewithal, but an opportunity to live out a boyhood fantasy. A profile in the Baltimore Sun from earlier this week paints about as optimistic a picture of the new owner as you could imagine. Drawing on interviews with business associates and previous statements by Rubenstein himself, the article depicts Rubenstein as a lifelong baseball fan and a proud Baltimorean who had long dreamed of owning the Orioles.

Until very recently, Rubenstein was chairman of the board of trustees of the Kennedy Center, and the center’s president described him as involved and attentive. A Carlyle Group senior adviser was quoted as saying, “He’ll have a strategy, which is one of his strong points. He’ll be deeply engaged. But I wouldn’t expect to see him showing up with the lineup cards.”

Deep-pocketed, engaged, but not micromanaging is basically the ideal sports owner. After suffering through Peter Angelos’ later years, followed by his sons’ fractious stewardship, Orioles fans must be gnawing through their rope at even the potential for an owner like that.

They deserve the local boy made good, the Cohen-in-miniature, that Rubenstein could become. Every fan base does. But Rubenstein’s stewardship must be judged by his actions. And we’re in luck: The Burnes trade offers Rubenstein a pivotal test right off the bat. Burnes immediately becomes Baltimore’s best starting pitcher since Mike Mussina. (At the risk of offending those Erik Bedard sickos who I know are out there somewhere.) And with only one year left until free agency, the Orioles have an opportunity to sign Burnes to an extension, but not much time to do it. Whether they pull it off, or at least come close, will send a powerful signal to the fans, as well as the likes of Adley Rutschman and Gunnar Henderson, about the new owner’s intentions.

We’ll see how far Rubenstein is willing to go. And only with time will we know which owner the Orioles are getting: creator or destructor. Fan, or financier.

Source

08 Aug 03:21

Woman Unwittingly Bought Back the Pants She Had Donated to a Goodwill Store For $8

by Matthew Gilligan


A woman shared a video on TikTok where she told viewers that she bought a pair of pants for $8 from a Goodwill store and then realized that it was a piece of clothing that she’d actually donated three weeks earlier.

Well, that’s unusual!


Photo Credit: TikTok

Her text overlay reads, “Middle Age Mom Update: I’m pretty sure I just paid $8 for a pair of pants I donated 3 weeks ago.”


Photo Credit: TikTok

And in her caption, she wrote, “But at least my daughter gets her LuLulemons and my son has a new $600 bat.”


Photo Credit: TikTok

Let’s check out her video.

@thenewstepford But at least my daughter gets her LuLulemons and my son has a new $600 bat. #funnymomsoftiktok #momproblems #fyp #goodwill ♬ original sound – The New Stepford

And here’s how people responded to it.

One person just had to go back and buy a lamp they had donated.


Photo Credit: TikTok

Another viewer said they owned a thrift store and this happened all the time.


Photo Credit: TikTok

And one TikTokker said their mom accidentally donated their favorite jeans so they had to go buy them back.


Photo Credit: TikTok

All’s well that ends well!

14 Apr 18:28

Here’s Why The Robots At Chuck E. Cheese Are Still Powered By Floppy Disks.

by Trisha Leigh

Businesses that have been around since before technology really took off have had to make a lot of changes over the years in order to not only stay relevant, but to keep their own now-outdated technology working.

While a lot of them enjoy having the latest and best of everything, others – like Chuck E. Cheese, for example – seem to live by the mantra “if it ain’t broke, don’t fix it.”

Specifically, we’re talking about their animatronic rodent-bots (which you probably recall from your childhood nightmares), which are still controlled by actual floppy disks.

There are more than 600 Chuck E. Cheese locations all over the world, but fewer than 50 still run the old school animatronics, which were created by “Studio C.”

According to one anonymous employee, that’s because it still functions just fine for their purposes.

“The floppies work surprisingly well. The animatronic, lighting, and show sync data are all in the floppy disks. I’ve seen a few of the newer Studio C Chuck E.’s run on flash drive/SD card combo. But usually, newer setups cause issues with stuff, and it’s easier to just keep the old stuff running.”

Other surprising things still run (or have recently still run) on floppy disks, too – like nuclear weapons code storage, Boeing 747s, and the entire San Fran public transit system.


Image Credit: iStock

Yes, really.

Tom Persky, the owner of floppydisk.com, says that’s because the old tech has more than a few benefits.

“If you’re looking for something very stable, really non-hackable – floppy disks are not internet-based, not network-based.

It’s quite elegant for what it does.”

I think there are more than a few reasons most businesses aren’t going to switch back to floppy disks anytime soon, but the logic does make sense.


Image Credit: Wikipedia

And this way, a whole new generation of kids can be traumatized by those robots.

Upside or downside? You decide.


04 Aug 19:10

Calvin and Hobbes for Wednesday, August 03, 2022

by Bill Watterson
19 Apr 17:51

Raw data: Cumulative excess deaths from COVID-19

by Kevin Drum

I haven't done a COVID-19 post for a little while, so I decided to use my dex-driven insomnia to do one tonight. Warning: I got pulled down a bit of a rabbit hole.

This time I calculated excess deaths per country, which is generally considered a more accurate metric than official figures for COVID death rates. Here are cumulative deaths per million since the start of the pandemic:

There are no big surprises here. The United States is highest and the Nordic countries are lowest. As usual, I'm comparing largeish, rich countries in Europe plus the US and Canada. The reason is that these countries are all culturally similar and economically similar, which means they could be expected to have fairly similar resources for responding to COVID.

But then I decided to look at all these countries by how far north they are. This is because there's a common belief that colder countries have lower natural rates of COVID spread. I used the population center of each country, not the geographical center, since viruses obviously spread among people, not trees or wheat fields. Here it is:

The trendline that showed the closest fit was a power curve, which had a surprisingly high R-squared of 0.58. This means that latitude explains 58% of the variability in COVID death rates.
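For the curious, a fit like this is a one-liner in most tools; here is a sketch in Mathematica with made-up numbers (the real latitude and death-rate data are not reproduced here):

(* fit a power curve to {latitude, cumulative excess deaths per million}; values are illustrative only *)
data = {{34, 3400}, {40, 2900}, {46, 2100}, {52, 1500}, {58, 900}};
fit = NonlinearModelFit[data, a lat^b, {{a, 10^7}, {b, -2}}, lat];
fit["RSquared"]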

As an aside, there was very little difference between the geographical center and the population center. In some cases this was because the countries were too small to allow much of a difference in the first place (Belgium, Switzerland), and in other cases it was because the population was fairly centrally dispersed (US, Germany). The country where it had the biggest impact was Canada, where the population center is near Toronto, which is almost literally the southernmost point in the country. The most surprising countries were Norway, Sweden, and Finland, which are all tall, thin countries with big populations in their southern climes but have a difference of only a degree or two between their geographical and population centers.

The first thing that comes to mind when you see this chart is how far each country is from the trendline. So I checked that out:

The winner (i.e., loser) is the UK, which has a death rate far higher than its latitude would predict. The US, by contrast, is fairly average. We have a high death rate, but we're also the farthest south of all the comparison countries (even farther south than Greece).

Using this metric, Sweden looks worse than its raw count suggests, as does Spain. Canada is by far the best, along with Norway and Denmark. Germany and Switzerland, which usually get good marks, turn out to be only average considering their location.

06 Feb 20:45

Wordle the Wolfram Way

by David Reiss

Wordle the Wolfram Way

The free online game Wordle has come at a time when each of us needs a simple, friendly challenge to take our minds off other worldly issues.

Josh Wardle created Wordle for his partner and then shared it with the world in October 2021. Then, news broke on January 31, 2022, that he had sold the application to the New York Times, who, it is expected, will eventually put it behind a paywall.

The concept is simple and engaging: you are challenged to guess a five-letter word in six guesses. Here are the rules from the Wordle website:

How to play Wordle

Posting only one new Wordle challenge a day is a sensible design choice—it gives you a new game once a day and protects you from any tendencies you might have to get stuck playing it over and over and over…

(That’s the idea at least.)

The Challenge of a Slow Weekend

In early January, I was texting with my daughter, who had introduced me to Wordle a few days earlier:

First text message

My colleagues and I at Wolfram Solutions build some pretty large and complex user interfaces for our customers. (A recent one came to about 25 thousand lines of Wolfram Language code just for the user interface portion alone.) So I am pretty confident when it comes to putting together a UI quickly. I decided to take up the challenge in order to keep myself busy during that slow weekend.

A few hours later, I texted my daughter back with an initial version:

Second text message

As you can see, I am a bit spelling-challenged. (This figures later in the story, along with the fact that my daughter is a speech pathologist….)

Now, in creating this, right away I violated the sacrosanct principle of “only one Wordle a day.” All I can hope is that anyone who gets hooked on this will forgive me.

I wrote a post at Wolfram Community to share the code with others, and also so that they could play with the code as well as the application itself. (You can read the full code and download the package from that post, as well as see some other folks’ comments.) It’s also an example of one way to design and code a Wolfram Language package.

Over the next week, I spent a little time tweaking the application to let the user choose which part of speech the word is restricted to, as well as whether the word is 4, 5, 6 or 7 characters long.

MWordle interface

(In the original version of this GIF on Wolfram Community, the application had “speech” misspelled as “speach”; given that my daughter is a speech pathologist, I should have known better! And it’s not her fault that she didn’t tell me of the error because I hadn’t yet given her the updated version.)

These additional bits of functionality, as compared to the original Wordle, are out of line with the (brilliant) simplicity of its design. But they make the point that both the algorithmic and user interface capabilities of the Wolfram Language let you explore variations and approaches to your heart’s content.

The MWordle.m package comes to a bit more than four hundred lines of Wolfram Language code. The JavaScript code of the web version is quite a bit more than this, but to be fair, it has more functionality than the Wolfram Language facsimile that I wrote. Using the Wolfram Language’s vast resources, you can customize, revise and debug versions ad infinitum and very efficiently. Though it was a slow weekend, after I created the first version of this, I still had plenty of time to do my laundry and catch up on Netflix, then come back to tweak the code.

And More…

Aside from creating the facsimile, interesting questions come up about strategies for playing the Wordle game. As expected, there is much chatter on the internet about this. What are the best words to use when making your first guess? How can you optimize subsequent guesses? And so on and so on….

Arnoud Buzing quickly created a ResourceObject that contains the actual word list that the web version of Wordle uses. (My code uses Mathematica’s dictionary through the WordData function.)

While I personally prefer to leave a patina of mystery to playing the game and approach the online version as if I were an algorithmically naive person, it’s incredibly straightforward to explore it with the Wolfram Language. Like many other things, a simple game like Wordle can be viewed as a starting point for exploring a particular computational world.

Here are some examples. Someone in my Wolfram Community post said, “I was thinking it would be good to compute what the best starting word would be for Wordle based on WordData, letter frequencies and letter position frequencies.”

So I took up the challenge and wrote the following as an example of one possible starting point—initially without taking into account the positions of the letters.

Here are all the five-letter words used in the application:

$fiveLetterWords = Module

There are 7,517 of them:

Length

Here is the ordering of the frequency of English letters for these five-letter words:

$orderedLetters = Keys
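The full definitions are in the Community post; here is one hedged way to build both lists with WordData (the exact filtering in the original may differ):

(* all lowercase five-letter dictionary words from Mathematica's built-in word list *)
$fiveLetterWords = DeleteDuplicates[Select[ToLowerCase /@ WordData[],
    StringLength[#] == 5 && StringMatchQ[#, Repeated[LetterCharacter, {5}]] &]];

(* letters of these words, ordered from most to least frequent *)
$orderedLetters = Keys[ReverseSort[Counts[Flatten[Characters /@ $fiveLetterWords]]]];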

So, let’s see if there are any words among the list of these five-letter words that match the highest five of the frequency-sorted letters (and requiring that there be no repeated letters in the word):

StringJoin /@ Select

Wow, there’s only one! And it takes care of another possible approach—try to have as many vowels as possible.

Let’s relax the constraint slightly and pull things from the highest nLetters characters in the frequency-ordered list, but still make sure that there are no repeating letters:

findWordlePossibilities
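The name findWordlePossibilities is from the post; this body is my reconstruction of the described behavior:

(* words built only from the top nLetters most frequent letters, with no repeated letters *)
findWordlePossibilities[nLetters_Integer] :=
  Select[$fiveLetterWords,
   DuplicateFreeQ[Characters[#]] &&
     SubsetQ[Take[$orderedLetters, nLetters], Characters[#]] &]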

From the top five letters, as before:

findWordlePossibilities

From the top six letters:

findWordlePossibilities

From the top seven letters:

findWordlePossibilities

Arnoud also wrote a post about an approach to optimizing the guesses that you might make as you work through a Wordle challenge. In it, he takes into account the letter frequencies based on the positions of letters in the word.

Peter Barendse suggested that code like the sort I have in the MWordle application could be used to train an intelligent agent to play Wordle. That’d be fun to do.

The possibilities are endless, and the Wolfram Language is the ideal vehicle for exploring the computational world of Wordle.

Visit Wolfram Community or the Wolfram Function Repository to embark on your own computational adventures!
13 Nov 17:30

This Simple Trick Will Help You Achieve Baked Potato Perfection

by Laurel Randolph
One step might surprise you.
25 Oct 18:31

Comic for 2021.10.22

New Cyanide and Happiness Comic
21 Oct 01:33

Cancel Columbus Day: Sun storms pinpoint Europeans being in Canada in 1021 A.D.

by Phil Plait

New research pinpoints an exact date Vikings from Europe were in North America: 1021 A.D. (one millennium ago this year), 430 years before Christopher Columbus was even born.

How was this determination possible? Because the Sun erupted in an immense series of storms that altered Earth's atmosphere, leaving measurable changes in tree rings at the time.

In the 1960s, archaeologists discovered a Norse Viking site in L'Anse aux Meadows, in the northern tip of Newfoundland. By looking at the styles of the remains there as well as examining Icelandic sagas (oral stories told over generations, and not written down until centuries later), they were able to get a rough date of the site of around 1000 A.D. But this is a relative date, found by comparing events at different times (so, say, they could tell it was after some other event, but not exactly how long after).

L’Anse aux Meadows is a Norse Viking site in Newfoundland, Canada, now dated to 1021A.D. Credit: Google Maps

This is where the Sun comes in. In the years 774 and 993 the Sun blasted out some truly epic explosions called solar storms, most likely in the form of solar flares. These are spectacular events that occur when the Sun's magnetic field lines get tangled up and suddenly short circuit, releasing their stored-up energy. This can easily equal hundreds of millions of times the energy released in a one-megaton nuke — the 774 A.D. event was the biggest in 10,000 years and was strong enough to power all human energy use for 300,000 years!

These flares send out huge waves of subatomic particles into space. When they slam into Earth's atmosphere they can generate aurorae and power outages, but they also create isotopes of certain atoms — for example, the majority of carbon on Earth has six protons and six neutrons in the atomic nucleus (called carbon-12), but a flare can increase the amount of the isotope carbon-14, which has 8 neutrons. The relative amounts of these isotopes can be measured in the lab; tree rings around the world were seen to have elevated amounts in both 774 and 993 A.D. (by 1.2% and 0.9%, respectively), and in fact those solar storms were first discovered by examining tree rings.

As you may know, a tree grows one ring per year, allowing the tree's age to be measured (this science is called dendrochronology). It can also be used to determine when the tree was felled, if the outer bark edge is still intact — somewhat amusingly, this is called the waney edge.

A team of scientists looked at wood found at the L'Anse aux Meadows Viking site. In three cases the trees had been physically cut down, and moreover, they were clearly cut with metal tools — Vikings had metal implements at the time, but indigenous people did not. The wood was all from different trees (one was fir, and another juniper, for example). The key parts here are that the wood was all from trees that had been alive for many decades, and all had their waney edge intact as well.

Microscopic photo of tree rings from a sample of wood taken at a Norse Viking site in Newfoundland, Canada, used to get an exact date for when the tree was cut down: 1021 A.D. Credit: Petra Doeve, University of Groningen

The scientists extracted 127 samples from the wood, and 83 rings were examined. They used two methods to secure dates. The first was to compare the amount of carbon-14 in each ring with known atmospheric amounts from the time. This gives a rough date for the waney edge of the wood. They also then looked for an anomalous spike in carbon-14 in an inner ring, knowing this would have come from the 993 A.D. event, and then simply counted the rings outward from there to get the date of the waney edge.

In all three samples the waney edge was dated to the same year: 1021 A.D. This would be incredibly unlikely to occur at random.

This means that Vikings were definitely in North America, specifically Newfoundland, Canada, more than four and a half centuries before Columbus. And mind you this may not have been the first visit, just the first we have evidence for. So Vikings were there in 1021 A.D. at the latest.

In fact, looking at different kinds of cells in the wood the scientists could tell one tree was felled in the spring of that year, while another was in the summer/autumn, indicating the Vikings were there for several months.

An aurora — the northern lights — shimmer over an ancient Viking village in Iceland. Aurorae are caused by the solar wind impacting Earth, and can be very strong during solar storms. Credit: Fred Concha (www.fredconcha.com @ All Rights Reserved) via Getty Images

This is historically very interesting, especially since it's still erroneously taught in American schools that Columbus discovered America. That's ridiculous on its face, since he and his crew met indigenous natives*, and it's known the Vikings were in Canada long before. But now, with this new work, we know exactly when they were there.

There's a lot of scientific value to this as well. The sagas tell of Vikings peacefully meeting indigenous people in America, and there was likely an exchange of flora and fauna. They may have swapped pathogens, too, which is common in such events, and makes for interesting epidemiology. There may also have been some, ah, genetic material exchanges as well, though testing of the population in Norse Greenland doesn't show evidence of it. That doesn't preclude it, though, and knowing humans it does seem like something like that may have happened. I'd bet that way.

I've said many times that science is a tapestry. In this case the metaphor is strong: It took the wildly different fields of solar astronomy and magnetism, cosmochemistry, dendrochronology, and archaeology all woven together to examine the evidence that led to understanding when exactly Europeans visited North America.

Science! I love this stuff.


*Also, he is presented as some sort of noble explorer who proved the Earth was round. This is grossly wrong. The Earth had been known to be round for at least 1700 years before Columbus, and in fact he thought the Earth was far smaller than already known; he expected the trip to be short and packed provisions accordingly. If he hadn't stumbled on the Bahamas by accident his crew might've starved due to his error. Also, his stature of "noble" is, um, highly arguable.

23 Sep 01:37

How to make your job suck less by understanding what kind of jerk you are at work and what kind of work jerk most-easily persuades you

by David McRaney

In this live taping of the podcast, Dr. Tessa West, the author of Jerks at Work, conducts quizzes to see what kind of jerk you are and what kind of jerk most easily persuades you in the workplace. You will also learn how to counteract the behaviors of people who make work suck more than it should. And you’ll hear about research into remote work, the best way to ask for a raise, networking, team building and more.

West is a leading expert on interpersonal interaction and communication and will explain how to make work suck less as we return to our offices and figure out how to balance working remotely with working in-person after a year of re-imagining what work even means. West’s new book is an exploration of all the psychological research into how and why gaslighters, bulldozers, neglectors, micromanagers and more do their thing in our workplaces and how to use what we know from decades of psychological research to counteract their Machiavellian machinations.

ABOUT JERKS AT WORK:

For anyone pulling their hair out over an irritating colleague who’s not technically breaking any rules, a hilarious guide to getting difficult people off your back from NYU psychology professor Tessa West

Ever watched a coworker charm the pants off management while showing a competitive, Machiavellian side to the lower ranks? The Kiss-Up/Kick-Down coworker doesn’t hesitate to throw peers under the bus, but their boss is oblivious to their bad behavior. What to do? In Jerks at Work, West draws on a decade of original research to profile classic workplace archetypes, including the Gaslighter, the Bulldozer, the Credit-Stealer, the Neglector, and the Micromanager, and gives advice to anyone who’s ever cried in a bathroom stall at the office.

West digs deep into the inner workings of each bad apple, exploring their motivations and insecurities–for instance, micromanagers develop compulsive habits due to poor managerial training and public shaming–and offers clever strategies for stopping each type of jerk in their tracks, such as:

• Bulldozers often gain extra influence in meetings by making sure they’re the first person to talk, even by saying “let’s start by all sharing our names,” which research shows portrays them as powerful. Don’t let them speak first!
• Kiss-Up/Kick-Down coworkers have so endeared themselves to their managers that, if you have to report them, you should do it in small doses over time–otherwise, you’ll trigger cognitive dissonance in your brainwashed boss.

Jerks at Work is the playbook that you wish you didn’t need but you’ll always turn to–and the answer to your endless “how to deal with a terrible boss” Google searches.

Links and Sources

19 Aug 01:28

New Methods for Computing Algebraic Integrals

by Sam Blake

New Methods for Computing Algebraic Integrals

In 2004, I became obsessed with computing integrals, using both elementary techniques known to calculus students and the Risch algorithm and its extensions by Davenport, Trager and Bronstein [1, 2, 3, 4, 5]. Taking inspiration from George Beck’s step-by-step differentiation program, WalkD, I decided to write a step-by-step program for computing indefinite integrals.

For example, it’s easy to write a simple set of replacement rules in Mathematica for integrating (expanded) polynomials:

linearityRule

Format

Using these rules, we can now integrate polynomials, for example:

integral

(integral

-x + (integral

-x - x^2/2 + (integral
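The flavor of those rules is easy to reproduce. Here is a minimal sketch (the inert head integral and the rule bodies are my reconstruction, not the original code):

(* replacement rules for an inert integral[f, x] head *)
linearityRule = integral[a_ + b_, x_] :> integral[a, x] + integral[b, x];
constantMultipleRule = integral[c_ f_, x_] /; FreeQ[c, x] :> c integral[f, x];
powerRule = integral[x_^n_., x_] /; FreeQ[n, x] && n =!= -1 :> x^(n + 1)/(n + 1);
constantRule = integral[c_, x_] /; FreeQ[c, x] :> c x;

(* apply the rules repeatedly to integrate a polynomial term by term *)
integral[-1 - x - x^2, x] //. {linearityRule, constantMultipleRule, powerRule, constantRule}
(* -> -x - x^2/2 - x^3/3 *)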

After spending far too many late nights entering integration techniques as pattern-matching rules into Mathematica, I had the code at a reasonable state and I sent it to Wolfram Research for possible inclusion in Mathematica. Soon after, I was offered a job and ended up working for Wolfram for several years, predominantly on Wolfram|Alpha. My step-by-step integrator is still computing many integrals, some of which I have most likely forgotten how to do myself.

As an aside, the idea of using rule-based programming to compute indefinite integrals dates back to 1961, with the Symbolic Automatic Integrator (SAINT) by James Slagle at MIT. SAINT could solve “symbolic integration problems approximately at the level of a good college freshman and, in fact, uses many of the same methods (including heuristics) used by a freshman” [6]. The step-by-step integrator I wrote used around 350 rules and could integrate more than 99% of integrals in calculus textbooks. Since then, Albert Rich used Mathematica to create the Rule-based Integrator (Rubi), which uses over 6,700 rules.

Fast forward to 2020 and I hadn’t looked at integrals for a decade. I decided to see what advances had been made in the last 10 or so years. I was particularly interested in a Gröbner basis–based algorithm developed by Manuel Kauers and its extensions by Brian Miller that could seemingly outperform the algebraic case of the Risch algorithm in the AXIOM computer algebra system on many integrals [7, 8]. For example:

IntegrateKauers

It’s trivial to check the result:

D

Once again, I quickly became hooked on integrals, or more specifically, algorithmic solutions to indefinite integrals.

When I last looked at symbolic integration, I was interested in the transcendental case of the Risch algorithm, of which Mathematica has a near-complete implementation. For example, the following is a simple integral for the transcendental case of the Risch algorithm:

Integrate

I became more interested with algebraic integrals, which cannot be integrated with the transcendental Risch algorithm. The algebraic case of the Risch algorithm is considerably more complex than the transcendental case and has not been completely implemented in any computer algebra system.

I initially considered an algebraic integral that appears in many calculus textbooks:

Algebraic integral

If we’re happy to play it fast and loose with branch cuts, then we can write this integral as:

Algebraic integral

For this integral, we can substitute and we get:

Algebraic integral

Substituting back, we get:

Algebraic integral

This answer has branch cut issues; however, we can fix this by writing as . Then we have the correct antiderivative:

Antiderivative

I wondered if this method of using a Laurent polynomial substitution to simplify an algebraic integral was just a trick that worked for this integral or a hint to a more general method. It turns out this trick works for many integrals; for example, the integral we tried previously on Kauers’s algorithm

Algebraic integral

can be reduced to

Algebraic integral

with the substitution u = x^4 + x^(-4). Once corrected for branch cut issues, the solution is given by:

Algebraic integral

A general method would seek a substitution of the form u = such that

Algebraic integral

where R1(u), R2(u) are rational functions of u and are undetermined coefficients.

We start by using SolveAlways to compute the undetermined coefficients in the u substitution:

Compute undetermined coefficients
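To see what this coefficient matching looks like on a small stand-in case (the quartic x^4 + 1 below is illustrative, not the radicand from the example above), we can ask SolveAlways for constants a and b that make a Laurent substitution u = x + a/x fit:

(* match x^4 + 1 against x^2 ((x + a/x)^2 + b), an identity in x *)
SolveAlways[x^4 + 1 == Expand[x^2 ((x + a/x)^2 + b)], x]
(* -> {{a -> -1, b -> 2}, {a -> 1, b -> -2}} *)

Taking a = 1 and b = -2 gives Sqrt[x^4 + 1] == x Sqrt[u^2 - 2] with u = x + 1/x (for positive x), which is exactly the kind of reduction used above.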

So we have a candidate substitution that fits the radicand part of the integrand. Does this substitution fit the rest of the integrand? We can compute this as follows:

GroebnerBasis

We have made the same simplification to the integral that we made by hand previously, namely

Algebraic integral

where u = .

This method is implemented with IntegrateAlgebraic at the Wolfram Function Repository. (In 2020, I further investigated the computation of pseudo-elliptic integrals in terms of elementary functions [9].) Given the simplicity of this method, it can integrate a wide range of integrals.
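If you want to experiment with it, the function can be pulled straight from the repository. Here is a sketch of the call pattern (the integrand is a classic pseudo-elliptic example of my choosing, and the derivative check mirrors the checks used elsewhere in this post):

(* fetch IntegrateAlgebraic from the Wolfram Function Repository and verify a result by differentiation *)
ia = ResourceFunction["IntegrateAlgebraic"];
result = ia[(x^2 - 1)/((x^2 + 1) Sqrt[x^4 + 1]), x];
Simplify[D[result, x] - (x^2 - 1)/((x^2 + 1) Sqrt[x^4 + 1])]
(* -> 0 if the returned antiderivative is correct *)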

Here are some examples:

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

Unlike the algebraic case of the Risch algorithm, this technique can quickly solve many integrals involving parameters:

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

What about the following integral?

Algebraic integral

Unlike the previous examples, this integral is not solvable with a Laurent polynomial substitution.

In 1882, Günther developed a method for computing some otherwise difficult algebraic integrals [10]. Given the integral

Algebraic integral

where p(x) and q(x) are polynomials in x, Günther made the substitution

Substitution

such that the integral becomes

Algebraic integral

where s(x) = v_(QP+R-1) x^(QP+R-1) + v_(QP+R-2) x^(QP+R-2) + … + v_1 x + v_0, where P = deg_x(p(x)), Q = deg_x(q(x)), R = deg_x(r(x)), and v_0, v_1, …, v_(QP+R-1), c_0, c_1, c_2, c_3 are undetermined coefficients and R(u^n) is a rational function in u^n with undetermined coefficients.

We can use Günther’s method to solve this integral in Mathematica as follows. The substitution is of the form:

Substitution

And we assume the integrand in u is of the form:

Integrand

Then the integrand in x is given by:

intx = D

Now we need to solve for the undetermined coefficients in the substitution (v_0, v_1, v_2) and in the rational integrand (w_0, w_1):

Solve for undetermined coefficients

We can substitute this solution into our integrand in u and into the substitution:

Integrand

Now we can use Integrate to compute the resulting integral:

Algebraic integral

Then substitute back to solve our original integral:

Algebraic integral

A quick check that our solution is correct:

Check results
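Checks like this are just differentiation followed by simplification; with a known pair, for instance:

(* D[F, x] - f simplifies to 0 exactly when F is an antiderivative of f *)
Simplify[D[ArcSinh[x], x] - 1/Sqrt[1 + x^2]]
(* -> 0 *)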

A generalized version of Günther’s method is implemented in IntegrateAlgebraic. This method can solve many otherwise difficult integrals. Here are some examples:

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

This method also handles integrals containing parameters:

IntegrateAlgebraic

IntegrateAlgebraic

If we combine this method with integrating term by term after expanding the integrand into a sum of terms, we can handle more exotic algebraic integrals:

IntegrateAlgebraic

Combining the Laurent polynomial substitution method with the generalized Günther method and integrating term by term allows us to compute even more complex integrals:

IntegrateAlgebraic

In this case, we wrote the integral as:

Algebraic integral

Then the integral was reduced to with the substitution u = 1 – x^3, while the integral was reduced to with the substitution s = .

Integrating expressions containing nested radicals has always been a tricky business. A well-known example is

Algebraic integral

which can be computed using the substitution . We can make this substitution with GroebnerBasis as follows:

GroebnerBasis

We need to express this relationship in terms of Dt[y]:

Solve

We can now integrate the rational function of u and substitute back for x:

Integrate
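The elimination step is easy to demonstrate on a small stand-in (u = Sqrt[x + Sqrt[x]] here, not the integral above): introduce t for the inner radical and let GroebnerBasis eliminate it, leaving a polynomial relation between u and x:

(* eliminate t = Sqrt[x] from u^2 == x + t *)
GroebnerBasis[{t^2 - x, u^2 - (x + t)}, {u, x}, {t}]
(* -> a single polynomial equivalent to (u^2 - x)^2 == x *)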

This method can solve much more difficult integrals involving nested radicals. For example:

GroebnerBasis

Integrate

A generalization of this approach is used within IntegrateAlgebraic. Here are some challenging examples:

IntegrateAlgebraic

IntegrateAlgebraic

IntegrateAlgebraic

Like the other methods in IntegrateAlgebraic, we readily handle integrals involving parameters:

IntegrateAlgebraic

All of the integrals in this post contain polynomial radicands; however, these methods generalize to rational radicands. For example:

IntegrateAlgebraic

IntegrateAlgebraic

There are still many algebraic integrals that these methods will not compute. For example, the following integral possesses an elementary (albeit enormous) solution:

Algebraic integral

However, compared to the algebraic case of the Risch–Trager–Bronstein algorithm, which is not completely implemented in any computer algebra system, these methods are fast, simple and complement the existing integration capabilities of Mathematica’s Integrate function. We are currently considering including IntegrateAlgebraic within Integrate in an upcoming release.

References

[1] R. Risch, “The Problem of Integration in Finite Terms,” Transactions of the American Mathematical Society, 139, 1969 pp. 167–189.

[2] R. Risch, “The Solution of the Problem of Integration in Finite Terms,” Bulletin of the American Mathematical Society, 76(3), 1970 pp. 605–608.

[3] J. Davenport, “Integration of Algebraic Functions,” in EUROSAM ’79: Proceedings of the International Symposium on Symbolic and Algebraic Computation, 1979 pp. 415–425.

[4] M. Bronstein, “Integration of Elementary Functions,” Journal of Symbolic Computation, 9(2), 1990 pp. 117–173.

[5] B. Trager, “Integration of Algebraic Functions,” dissertation, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1984.

[6] J. Slagle, “A Heuristic Program That Solves Symbolic Integration Problems in Freshman Calculus: Symbolic Automatic Integrator (SAINT),” dissertation, Massachusetts Institute of Technology, Dept. of Mathematics, 1961.

[7] M. Kauers, “Integration of Algebraic Functions: A Simple Heuristic for Finding the Logarithmic Part,” ISSAC ’08, 2008 pp. 133–140.

[8] B. Miller, “On the Integration of Elementary Functions: Computing the Logarithmic Part,” dissertation, Texas Tech University, Dept. of Mathematics and Statistics, 2012.

[9] S. Blake, “A Simple Method for Computing Some Pseudo-elliptic Integrals in Terms of Elementary Functions,” 2020. arxiv.org/abs/2004.04910.

[10] S. Günther, “Sur l’évaluation de certaines intégrales pseudo-elliptiques,” Bulletin de la Société Mathématique de France, 10, 1882 pp. 88–97. www.numdam.org/article/BSMF_1882__10__88_1.pdf.

12 Feb 19:17

Comic for 2021.02.12

New Cyanide and Happiness Comic
24 Jul 18:09

Fairytale NEOWISE

Comet dust falls through a twilight sky in this dream-like scene.


12 Jun 18:34

Long Gone Summer Revisits the Great (?) 1998 Home Run Chase

by Jay Jaffe

It’s not the year of a round-numbered anniversary, but as it’s a time without major league baseball, it will do. On Sunday at 9 pm ET, ESPN will air its premiere of Long Gone Summer, a 30 for 30 documentary on the 1998 home run race between Mark McGwire and Sammy Sosa as they vied to break Roger Maris‘ single-season mark of 61 homers, which had stood since 1961. While subsequent allegations concerning performance-enhancing drugs have dulled the luster of the two sluggers’ astronomical totals — 70 for McGwire, 66 for Sosa — director AJ Schnack is far less interested in singling out the pair for scolding than in reliving the excitement of the race, and the camaraderie of the two rivals, which isn’t to say that the topic of PEDs goes unaddressed.

Indeed, Schnack, an award-winning filmmaker whose previous credits include documentaries about They Might be Giants and Kurt Cobain, has gone against the industry grain at least somewhat in making the movie. As he told Uproxx’s Mike Adams this week:

I grew up outside St. Louis, also went to Mizzou. I was a Cardinal fan. That summer really reconnected me with my childhood experience of enjoying sports and enjoying baseball, driving around with my dad, listening to Jack Buck and Mike Shannon on the radio. And when that summer happened, I’d moved to L.A. I was starting to work in film, and it just reconnected me with all of those feelings and the emotions and the excitement that I felt about baseball. So I felt like, yes, we now know that that summer took place in baseball’s steroid era. But, first, especially for people younger than us, I want to just say this is what that felt like, to be in the middle of that summer.

It’s a treatment that not everybody may be on board with, but one needn’t look too hard elsewhere to find somebody willing to shake their finger and scowl at the pair. McGwire is no longer on the BBWAA’s annual Hall of Fame ballot, having topped out at a meager 23.6% during his 10-year run, but Sosa is, and the annual reminders of why 90-something percent of the voters aren’t including him on their ballots despite his 609 career home runs and the thrills he provided along the way are a dime a dozen. While various people interviewed for the documentary, including McGwire (Sosa, not so much), express their regrets, the movie also reminds us that commissioner Bud Selig, union leader Donald Fehr, and the baseball industry in general cheered the two sluggers’ accomplishments while becoming engrossed in the chase.

Most of the documentary’s interviewees were directly involved with the race in some way, as players, coaches, managers, executives, club employees, family members, or media. Yours truly is one of the few latecomers interviewed; I was merely a fan circa 1998, and while my regular attendance at Yankees games had more to do with the career change that would bring me into baseball writing a few years later, I was certainly following the race, and have written a considerable amount about both players as well as PEDs and home run totals in the past two decades. I get a few chances to speak my piece and provide some historical perspective, as does Effectively Wild’s Ben Lindbergh.

I viewed a screener of Long Gone Summer on Thursday and will have a fuller write-up of my reflections regarding the movie on Monday. Here’s the trailer:

If I’m reading the schedule correctly, Long Gone Summer will re-air at midnight ET that Sunday night/Monday morning, then at 7 pm ET Monday, and midnight on Friday/Saturday. It will also be available on demand via the ESPN app for Apple, Roku, and other platforms.

06 Feb 03:50

Comic for 2020.02.05

New Cyanide and Happiness Comic
06 Dec 02:24

Roger Federer Becomes First Ever Living Person Celebrated on Swiss Coins

by twistedsifter


Tennis legend and Swiss icon, Roger Federer, has become the first living person to ever be celebrated on Swiss coins. The country’s federal mint, Swissmint, will release a 20 Swiss franc silver commemorative coin in January and plans to add a Federer SFr50 gold coin in May.

Demand for the commemorative coin has been colossal, and people signing up to pre-order the first batch crashed the site this week. “We had 2.5 million clicks. It was too much for the shop to handle,” said Swissmint CEO Marius Haldimann.

According to the ATP:

Of the 35,000 20-franc silver coins offered in the pre-sale window, 15,000 have been snapped up. The remaining 20,000 coins from the initial run are expected to sell quickly when the website returns to full functionality. An additional 40,000 will be released in May, when a 50-franc gold coin will also be released.


Designed by Italian engraver Remo Mascherini, the coin features an image of the 20-time Grand Slam winner based on what appears to be a wire photo from Federer’s winning turn against Greece’s Stefanos Tsitsipas at the 2018 Australian Open. [source]



29 Nov 17:03

In The Name Of Science

by Team Awkward

“My son’s assignment was finding a field of science to research and draw a picture of. My son chose proctology.”

(submitted by Nicole)


08 Nov 02:19

Building a Lattice Boltzmann–Based Wind Tunnel with the Wolfram Language

by Paritosh Mokhasi

My student days learning fluid dynamics were all about studying complicated equations and various methods of simplifying and manipulating these equations to get some kind of a result. Unfortunately, this left very little to the imagination when it came to getting an intuitive feel for how a fluid would behave in different situations. When I took my first experimental fluid dynamics course, I got to see how one would use different visualization techniques to understand qualitatively the behavior of the flow. These visualizations gave me a way of creatively looking at a flow, and, as an added bonus, they looked stunning. All these experiments and visualizations were being carried out inside a wind tunnel.

Building a Lattice Boltzmann–Based Wind Tunnel with the Wolfram Language

Creating Our Computational Wind Tunnel

Wind tunnels are devices used by experimental fluid dynamic researchers to study the effect of an object as it moves through air (or any other fluid). Since the object itself cannot move inside this tunnel, a controlled stream of air, generally produced by a powerful fan, is generated and the object is placed in the path of this stream of air. This produces the same effect as the object moving through stationary air. Such experiments are quite useful in understanding the aerodynamics of an object.

There are different kinds of wind tunnels. The simplest wind tunnel is a hollow pipe, or rectangular box. One end of this tunnel is fitted with a fan, while the other end is open. Such a tunnel is called an open-return wind tunnel. Experimentally, they are not the most efficient or reliable, but would work well if one were to build a computational wind tunnel—which is what we aim to do in this blog post. Here is the basic schematic of the wind tunnel that we will develop:

Basic schematic of the wind tunnel

Our wind tunnel will be a 2D wind tunnel. Fluid will enter the tunnel from the left and leave from the right. The top and bottom are solid walls. I should point out that since this is a computational wind tunnel, we are afforded great flexibility in choosing what kinds of boundaries it can possess. For example, we could make it so that the left, right and bottom walls do not move but the top wall does. We would then have the popular case of flow in a box.

When one starts thinking of computational fluid dynamics, our thoughts invariably jump to the famous Navier–Stokes equations. These equations are the governing equations that dictate the behavior of fluid flow. At this point, it might seem like we are going to use the Navier–Stokes equations to help us build a wind tunnel. But as it turns out, there are other methods to study the behavior of fluid flow without solving the Navier–Stokes equations. One of those methods is called the lattice Boltzmann method (LBM).

If we were to use the Navier–Stokes equations, we would be dealing with a complicated system of partial differential equations (PDEs). To solve these numerically, we would have to employ various techniques to discretize the derivatives. Once the discretization is done, we are left with a massive system of nonlinear algebraic equations that has to be solved. This is computationally exhausting! Using the alternative approach of the LBM, we completely bypass this traditional approach. There are no systems of equations to solve in the LBM. Also, a lot of the operations (which I will describe later) are completely local operations. This makes the LBM a highly parallel method.

In a very simplistic framework, we can think of the LBM as a “bottom-up” approach. In this approach, we perform the simulations in the microscopic, or “lattice,” domain. Imagine you have a physical or macroscopic domain. If we were to zoom in on a single point in this macroscopic domain, there would be a number of particles that are interacting with each other based on some “rule” about their interaction:

Particle interaction

For example, if two particles hit each other, how would they react or bounce off each other? These particles follow some discrete rule. Now, if we were to let these particles evolve over time based on these rules (you can see how this is closely related to cellular automata) and take averages, then these averages could be used to describe certain macroscopic quantities. As an example, the HPP model (named after Hardy, Pomeau and de Pazzis) we saw in the previous figure can be used to simulate the diffusion of gases.

Though this discrete approach sounds enticing (and researchers in the mid-1970s did try out its feasibility), it has a number of drawbacks. One of the major issues was the statistical noise in the final result. However, it is from these principles and attempts to overcome the drawbacks that the LBM emerged. A web search on theoretical aspects of this method will reveal many links to derivations and the final equations. In this blog, rather than focus on the theoretical aspect, I would like to focus on the final underlying mechanism through which the lattice Boltzmann simulations are performed. So I will only touch on the final equations we will need to develop our wind tunnel. First, assume that the density and the velocities are described by the following equations:

ρ = Σi fi,   u = (1/ρ) Σi ex,i fi,   v = (1/ρ) Σi ey,i fi   (i = 1, …, 9)

… where the fi are called distribution functions and ex,i, ey,i are discrete velocities that take the following values:

Discrete velocity values

The computation of the density and velocities can be done as follows:

ex = {1, 1, 0, -1, -1, -1, 0, 1, 0};
	ey = {0, 1, 1, 1, 0, -1, -1, -1, 0};
	LBMDensityAndVelocity[f_] := Block[{rho, u, v},
	   rho = Total[f, {2}];
	   u = (f.ex)/rho;
	   v = (f.ey)/rho;
	   {rho, u, v}
	   ];

These nine discrete velocities are associated with their respective distribution functions fi and can be visualized as follows:

Nine discrete velocities
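Since ex and ey are already defined above, a rough stand-in for that figure can be drawn directly from them (a small illustrative sketch of mine, not part of the original post):

(* draw the eight moving D2Q9 directions as arrows; Most drops the rest
   particle {0, 0} *)
Graphics[Arrow[{{0, 0}, #}] & /@ Most[Transpose[{ex, ey}]], ImageSize -> 150]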

At each (discrete) point in the lattice domain, there will exist nine of these distribution functions. A model that uses these nine discrete velocities and functions fi is called the D2Q9 model. If the distribution functions fi are known, then the velocity field is known. Since the velocity field evolves over space and time, we must expect these distribution functions to also evolve over space and time. The equation governing the spatio-temporal evolution of fi is given by:

The equation governing the spatio-temporal evolution

The term Ωi is a complicated “collision” term that basically dictates how the various fi interact with each other. Using a number of simplifications and approximations, this equation is reduced to the following:

fi(x + ex,i, y + ey,i, t + δtLBM) = fi(x, y, t) − (fi(x, y, t) − fieq(x, y, t))/τ

… where fieq is called the equilibrium distribution function, τ is called the relaxation parameter and δtLBM = 1 is the time step in the lattice Boltzmann domain. This approximation is called the BGK approximation, and the resulting model is called the lattice BGK model. The detailed definitions of these two terms will be provided later. The spatial and temporal evolutions of these functions are done in two steps: (a) the streaming step; and (b) the collision step.

Since δtLBM = 1 and the components of the discrete velocities are all 0 or ±1, the streaming step is given by:

fi(x + ex,i, y + ey,i, t + 1) = fi(x, y, t)

To visualize the streaming step, imagine the discretized domain. On each grid point of this domain are nine of these distribution functions (as shown in the following figure). Note the various colors for each of the grid points. The length of each arrow indicates the magnitude of the respective distribution functions:

The discretized domain

Based on the mathematical formulation, each distribution will be streamed in the respective directions as follows:

Each distribution streamed in the respective directions

Note the colors and where they ended up. It would be helpful to just focus on the center point. Before the streaming step, the arrows (which represent the different fi) are all green. After the streaming step, notice where the green arrows land on the surrounding grid points. This, in short, is the streaming step. In Mathematica, this step is done very easily using the built-in functions RotateLeft or RotateRight. Here, we will be using RotateRight:

right = {0, 1};
	left = {0, -1};
	bottom = {1, 0};
	top = {-1, 0};
	none = {0, 0};
	streamDir = {right, top + right, top, top + left, left, bottom + left,
	    bottom, bottom + right, none};
	LBMStream[f_] :=
	  Transpose[
	   MapThread[Flatten[RotateRight[#1, #2]] &, {f, streamDir}]];

When one does the streaming, special care has to be taken to address the boundaries of the wind tunnel. When the streaming is done, there are certain fi that become unknown at the edges and corners of the domain. The schematic (shown in the following figure) shows which fi are unknown for their respective edges and corners. The unknowns are represented by dashed arrows:

Unknowns

To understand how the unknown fi are computed, let us consider the top wall of the wind tunnel. For this edge, f6, f7, f8 are unknowns. We let:

fi = fieq + wi (ex,i Qx + ey,i Qy),   i = 6, 7, 8

… where Q = (Qx, Qy) is a two-dimensional unknown vector, and fieq are the equilibrium distribution functions, defined as:

fieq = wi ρ [1 + 3(ex,i u + ey,i v) + (9/2)(ex,i u + ey,i v)² − (3/2)(u² + v²)]

The details of this approach are given in the paper by Ho, Chang, Lin and Lin. This operation (which is actually a highly parallelizable operation) can be done efficiently in Mathematica as:

wts = {1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 4/9};
	LBMEquilibriumDistributions[rho_, u_, v_] :=
	  Module[{uv, euv, velMag},
	   uv = Transpose[{u, v}];
	   euv = uv.{ex, ey};
	   velMag = Transpose[ConstantArray[Total[uv^2, {2}], 9]];
	   (Transpose[{rho}].{wts})*(1 + 3*euv + (9/2)*euv^2 - (3/2)*velMag)
	   ];

Notice that fieq are completely defined by the velocity and density. Substituting this into the equations for velocity and density, we get:

Velocity and density equations

… where UBC, VBC are the velocities specified by the user for the boundary. The resulting equations are linear: there are three unknowns {ρ, Qx, Qy} and three equations. This system can easily be solved using LinearSolve or Solve. The same procedure can be used for all the edges and corners. Since this is a small 3 × 3 system, we can precompute the expressions for the missing distributions symbolically. So for the top wall, we would have:

Clear[f, feq, rho, ubc, vbc];
	feq = LBMEquilibriumDistributions[{rho}, {ubc}, {vbc}][[1]];
	eqn1 = rho*ubc == Sum[ex[[i]]*f[i], {i, 5}] + Sum[ex[[i]]*(feq[[i]] +
	        wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ex[[9]]*f[9];
	eqn2 = rho*vbc == Sum[ey[[i]]*f[i], {i, 5}] + Sum[ey[[i]]*(feq[[i]] +
	        wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + ey[[9]]*f[9];
	eqn3 = rho == Sum[f[i], {i, 5}] + Sum[(feq[[i]] +
	       wts[[i]]*(ex[[i]]*qx + ey[[i]]*qy)), {i, 6, 8}] + f[9];
	res = FullSimplify[First[Solve[{eqn1, eqn2, eqn3}, {rho, qx, qy}]]]

Once {ρ, Qx, Qy} are known, we then substitute the values and obtain the unknowns:

FullSimplify[Table[(feq[[i]] + wts[[i]]*(ex[[i]]*qx
	        + ey[[i]]*qy)), {i, 6, 8}] /. res]

When dealing with outflow boundary conditions, we are essentially trying to impose a 0-gradient condition across the boundary, i.e. ∂u/∂n = 0. The simplest thing one can do is to use the velocity from one grid point behind the boundary to compute fi at the boundary. However, that leads to inaccurate and, in many cases, unstable results.

Consider the right wall as shown in the following figure:

Right wall

After the streaming step, the distributions f4, f5, f6 are unknown. To impose an outflow condition on the distribution functions fi, we use the following relation:

Imposing an outflow condition

… where u* is a velocity taken from the previous time step at the boundary grid points (j, N), j = 1, 2, …, M, and fi,N(t) are the distributions at the previous time step. The scalar term u* must be positive. A good candidate is the normal velocity at the boundary: the u component for the left and right walls, and the v component for the top and bottom walls. If u* comes out to be 0, it can be replaced by the characteristic lattice velocity. It is important to note that the unknown distributions change depending on which wall has the outflow condition. For example, if it is the top wall, then f6, f7, f8 are unknown. Details of this approach for outflow treatment can be found in the paper by Yang.

A second approach to imposing an outflow condition would be to simply do the following:

Second approach to imposing outflow condition

This method is much simpler than the first one, and is based on applying a second-order backward difference formula to each of the distribution functions.
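To make the idea concrete, here is a minimal sketch of this second treatment (my reading, not code from the original post), assuming the distributions are stored as an {ny, nx, 9} array and the outflow is on the right wall; setting the second-order one-sided difference to zero and solving for the boundary column gives:

(* hedged sketch: zero-gradient outflow on the right wall, from
   (3 fN - 4 fN-1 + fN-2)/2 = 0  =>  fN = (4 fN-1 - fN-2)/3 *)
outflowRight[fGrid_] := Module[{g = fGrid},
   g[[All, -1, All]] = (4 g[[All, -2, All]] - g[[All, -3, All]])/3;
   g];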

Once the streaming is done and boundary conditions are imposed, a “collision” is performed (recall the rule-based approach). This step is basically figuring out how the ensemble-averaged particles are going to interact with each other. This step is a bit complicated, but using the BGK approximation, the collision step can be written as:

fi ← fi + (fieq − fi)/τ,   where τ = 3νLBM + 1/2

… where ρ, u and v are the density and velocities computed after the streaming and boundary-adjustment steps. The term νLBM is the kinematic viscosity in the lattice Boltzmann domain, and τ is the relaxation parameter. This term is quite important and will be discussed in a bit.

This translates to two very simple lines of code in Mathematica:

LBMCollide[f_, u_, v_, rho_, tau_] := Block[{feq},
	   feq = LBMEquilibriumDistributions[rho, u, v];
	   f + (feq - f)/tau
	   ];

From a visual standpoint, after the collision, the distribution functions are adjusted based on the previous formula, and we get new distributions:

New distributions

Notice the change in the lengths of the arrows. That’s it! That is all it takes to run a lattice Boltzmann simulation.

Bringing Simulation to Reality with the Reynolds Number

So how is something like this supposed to simulate the fabled Navier–Stokes equations? To answer that question, we would have to go into a lot of math and multiscale analysis, but in short, the form of feq dictates the macroscopic equations that the LBM simulates. So it is actually possible to simulate a whole bunch of PDEs by using the appropriate equilibrium functions. Once again, I will leave it to the reader to go out into the internet world and get a sample of the remarkable things people simulate using the LBM.

Having seen the basic mechanism of the LBM, the obvious next question is: how do the simulations that are performed in the lattice-based system translate to the physical world? This is done by matching the non-dimensional parameters of the lattice system and the physical system. In our case, it is a single non-dimensional parameter: the Reynolds number. The Reynolds number (Rey) is defined as:

Rey = Uphy Lphy / νphy

… where Uphy is a characteristic velocity, Lphy is the characteristic length and νphy is the kinematic viscosity in the physical domain. In order to simulate the flow, the user is expected to specify the Reynolds number, the characteristic velocity and the characteristic length. From these three pieces of information, the kinematic viscosity is determined, and subsequently the relaxation parameter τ is determined. Using these pieces of information and the underlying equations associated with the LBM, the internal parameters of the simulation are computed.

The characteristic lattice velocity ULBM must never exceed 1/√3 ≈ 0.577 (this number is the speed of sound in the lattice domain). The lattice velocity must remain significantly below this value for it to properly simulate incompressibility. In general, the lattice velocity is taken to be ULBM = 0.1. Similarly, the characteristic lattice length LLBM represents the number of points used in the lattice domain to represent the characteristic length in the physical domain. LLBM is an integer quantity, and is typically user defined.

Let us look at an example to solidify how to relate the lattice simulation to the physical simulation. Let us assume that Rey = 100, Uphy = 1, Lphy = 1 and the physical dimensions of the wind tunnel are Lw = 15Lphy, Hw = 5Lphy. For the lattice domain, we assume ULBM = 0.1, LLBM = 10. The lattice wind tunnel dimensions are then Lw,LBM = 15 × 10 = 150 and Hw,LBM = 5 × 10 = 50. The viscosity in the lattice domain is:

νLBM = ULBM LLBM / Rey = (0.1 × 10)/100 = 0.01

This means that the relaxation parameter τ in the BGK model is:

τ = 3νLBM + 1/2 = 0.53

We now have all the quantities we need to run the simulation. If we now let each lattice time step in the simulation be δtLBM = 1, then we need to know what δtphy is. This is done by equating the viscosities and is given by:

δtphy = νLBM (Lphy/LLBM)² / νphy = 0.01 × (0.1)² / 0.01 = 0.01

Therefore, if we want to run our simulation for t = 100 (time units), then in the lattice domain we would be iterating for 100/0.01 = 10,000 steps.
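Here is that bookkeeping as a short calculation (my sketch, using the relations stated above):

(* lattice viscosity, relaxation parameter and physical time step for the
   example numbers; assumes tau = 3 nuLBM + 1/2 and dx = lPhy/lLBM *)
rey = 100; uPhy = 1; lPhy = 1; uLBM = 0.1; lLBM = 10;
nuPhy = uPhy*lPhy/rey;               (* 0.01 *)
nuLBM = uLBM*lLBM/rey                (* 0.01 *)
tau = 3*nuLBM + 1/2                  (* 0.53 *)
dtPhy = nuLBM*(lPhy/lLBM)^2/nuPhy    (* 0.01 *)
100/dtPhy                            (* 10 000 lattice steps for t = 100 *)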

The Reynolds number is a remarkable non-dimensional parameter. Rather than specify what fluid we are simulating at what velocities and at what dimension, the Reynolds number ties them all together. This means that if we have two systems of vastly different length, velocity scale and fluid medium, the two flows will behave the same as long as the Reynolds number remains the same.

Adding Objects to the Wind Tunnel

Let us now talk about how to introduce objects into the wind tunnel. One approach would be to discretize the objects into a jagged “steps” version of the original object and align it with the grid, then impose a no-slip boundary condition on each one of the step edges and corners:

Imposing a no-slip boundary condition

This is really not a good approach because it distorts the original object; if one needed a good representation of the object, they would have to use an extremely fine grid—making it computationally expensive and wasteful. Furthermore, sharp corners can often induce unwanted behavior in the flow. A second approach would be to immerse the object into the grid. The boundary of the object is discretized and is immersed into the domain:

Immersing the object into the grid

Discretizing the boundary of a specified object can easily be done using the built-in function BoundaryDiscretizeRegion. We can specify Disk or Circle to generate a set of points that represents the discretized version of the circular object:

bmr = BoundaryDiscretizeRegion[Disk[]];
	pts = MeshCoordinates[bmr];
	Show[bmr, Graphics[Point[pts]], ImageSize -> 250]

This method of discretizing the object and placing it inside the grid is called the immersed boundary method (IBM). Once the discretized object is immersed, the effect of that object on the flow needs to be modeled while making sure that the velocity conditions of the boundary are respected. One method of making the flow “feel the presence” of the immersed object is through a method called the direct forcing method. With this approach, the lattice BGK model is modified by adding a forcing term Fi to the evolution equation:

Evolution equation

… where F is the corrective force induced on a grid point by the object boundaries. The equation for computing the velocity is now modified as:

Computing velocity

The corrective force is computed as:

Corrective force

… where UB are the velocity boundary conditions of the object, u are the velocities at the object boundaries if the object were not present, δ is an approximation to the delta function, and (X, Y) are the positions of the boundary points of the object, generally called Lagrangian boundary points. There are several choices of δ that one can use. We will make use of the following compactly supported function:

Compactly supported function

This approximation is also called the mollifier kernel and can be defined using the Piecewise function:

Clear[deltaFun];
	deltaFun[r_?NumericQ] :=
	  Piecewise[{{(5 - 3 Abs[r] - Sqrt[1 - 3 (1 - Abs[r])^2])/6,
	     0.5 <= Abs[r] <= 1.5}, {(1 + Sqrt[1 - 3 r^2])/3,
	     Abs[r] <= 0.5}}];
Plot[deltaFun[r], {r, -2, 2}, ImageSize -> 300]

The 2D function δ(x − X, y − Y) is then given by:

2D function

… where dx, dy are scaling parameters. For each Lagrangian point (Xi, Yi), a δ function is specified. Here is what the delta function would look like if centered at (1/2, 1/2) and scaled by 1:

Plot3D
&#10005
Plot3D[deltaFun[(x - 1/2)]*deltaFun[(y - 1/2)], {x, -2, 2}, {y, -2,
	  2}]

Let’s look at an example to demonstrate this immersed boundary concept, as well as how the δ function is constructed and how it is used for approximating a function. Assume that a circle is immersed in a rectangular domain:

Clear[deltaFun];
	deltaFun[r_?NumericQ] :=
	  Piecewise[{{(5 - 3 Abs[r] - Sqrt[1 - 3 (1 - Abs[r])^2])/6,
	     0.5 <= Abs[r] <= 1.5}, {(1 + Sqrt[1 - 3 r^2])/3,
	     Abs[r] <= 0.5}}];
ng = N[Range[-2, 2, 4/30]];
	dx = dy = ng[[2]] - ng[[1]];
	grid = Flatten[Outer[List, ng, ng], 1];
	n = 30;
	bpts = N[CirclePoints[n]];
	Graphics[{{Red, PointSize[0.03], Point[bpts]}, {Blue, Point[grid]}},
	 ImageSize -> 300]

Each Lagrangian boundary point influences the grid points within a certain radius, as shown in the following figure:

Lagrangian boundary points

To get the grid points that are influenced by each of the Lagrangian points, we make use of the Nearest function:

dr = 1.5 Sqrt[dx^2 + dy^2];
	nf = Nearest[grid -> Automatic];
	influenceGridPtsIndex = nf[bpts, {Infinity, dr}];

The function δ(x – X(s), y – Y(s)) for discrete points essentially becomes a matrix:

gp = grid[[#]] & /@ influenceGridPtsIndex;
	dd = MapThread[Transpose[#1] - #2 &, {gp, bpts}];
	dval = Table[
	   Map[deltaFun[#/dx] &, di[[1]]]*Map[deltaFun[#/dy] &, di[[2]]], {di,
	     dd}];
	t = Flatten[
	   MapThread[
	    Thread[Thread[{#1, #2}] -> #3] &, {Range[n],
	     influenceGridPtsIndex, dval}]];
	dMat = SparseArray[t, {n, Length[grid]}]

This matrix can now be used to compute the values at the Lagrangian points. For example, if the underlying grid has values on it defined by h(x, y) = sin(x + y), then the values at the Lagrangian points are computed as:

Values at Lagrangian points

... where D is the discretization of δ and h(xj, yj) are the function values at the grid points (xj, yj):

bptVal = dMat.Sin[Total[grid, {2}]];

We can compare the computed interpolated value to the actual values:

ListLinePlot[{bptVal, Sin[Total[bpts, {2}]]}, ImageSize -> 300,
	 PlotStyle -> {Red, Black}, PlotLegends -> {"Computed", "Actual"}]

Similarly, the function values at the grid can be computed using the function values at the Lagrangian points as:

wts = IdentityMatrix[n];
	wts[[1, 1]] = wts[[-1, -1]] = 0.5;
	wts *= (Norm[bpts[[2]] - bpts[[1]]])/(dx*dy);
	gridVal = Sin[Total[bpts, {2}]].wts.dMat;
	hfun = Interpolation[Transpose[{grid, gridVal}]];
	Plot3D[hfun[x, y], {x, -2, 2}, {y, -2, 2}, PlotRange -> All]

As you can see, since the δ functions have compact support, only grid points that lie in their radius of influence get interpolated values. All grid points that are not in their support radius are 0.

So, to remind the readers, one single step in the lattice Boltzmann simulation consists of the following steps:


  1. Perform the streaming step.
  2. Adjust distribution functions at the boundaries.
  3. Perform the collision step.
  4. Compute the density and velocities.
  5. Compute the velocities at the Lagrangian boundary points of the objects.
  6. For each boundary point of the object, compute the corrective force needed to enforce the boundary conditions at that point.
  7. Compute the corrective forces at the lattice grid points using the forces obtained in step 6.
  8. Perform the streaming and collision steps, taking the forces into account.
  9. Calculate density and velocities.

This concludes all the necessary ingredients needed to run a wind tunnel simulation using the LBM in 2D.
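Schematically, one object-free iteration built from the helper functions above might look like the following sketch (my summary, not code from the package; the boundary adjustment of step 2 depends on the chosen wall conditions and is left as a placeholder):

(* hedged sketch of a single LBM step; note that LBMStream takes the
   distributions as a list of nine 2D grids and returns a (sites x 9)
   matrix, so a reshape is needed before the next call *)
LBMStep[f_, tau_] := Module[{fs, rho, u, v},
   fs = LBMStream[f];                        (* 1. streaming *)
   (* 2. fix the unknown fi on the edges and corners here *)
   {rho, u, v} = LBMDensityAndVelocity[fs];  (* macroscopic fields *)
   LBMCollide[fs, u, v, rho, tau]            (* 3. collision *)
   ];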


Examples

To make this wind tunnel easy to use, I have put all these functions into a package called WindTunnel2DLBM. It contains a number of features and allows for easy setup of the problem by a user. I would recommend the interested user go through the package documentation for details. The focus here will be on the various examples and the flexibility our computational wind tunnel setup offers.

The first example is the flow in the wind tunnel. This is perhaps the simplest case. A schematic of the domain and its associated boundary conditions are shown here:

Schematic of the domain and its associated boundary conditions

In this case, there is only one length scale to the problem: the height of the wind tunnel. Therefore, that becomes our characteristic length scale. The characteristic velocity in this case is the maximum velocity coming from the inlet, which is set to 1. All that remains is to specify the Reynolds number at which the simulation is to be carried out. This is user defined as well. Let us take the length of the wind tunnel to be 6 units going from (0,6), and the height to be 2 units going from (-1,1). We now set up the simulation:

<< WindTunnel2DLBM`;
Rey = 200;
	charLen = 1;
	charVel = 1;
	ic  = Function[{x, y}, {0, 0}];
	state = WindTunnelInitialize[{Rey, charLen, charVel},
	  ic, {x, 0, 6}, {y, -1, 1}, t]

Notice that we did not provide any boundary condition information here. That is because the wind tunnel defaults to the flow in a channel case, and therefore all the boundary conditions are automatically imposed. All we have to specify are the characteristic information and the dimensions of the wind tunnel.

The simulation is performed using a fixed time step. The time step is internally computed and can be accessed from the following property:

state["TimeStep"]
&#10005
state["TimeStep"]

Let us now run the simulation for a period of 5 time units:

WindTunnelIterate[state, 5];
	state

We can query the data at the final step of the simulation:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];

The solution can be visualized in a variety of ways. For 2D simulation, a streamline plot can reveal some useful information. Let us visualize the streamline plot:

StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 6}, {y, -1, 1},
	 AspectRatio -> Automatic, ImageSize -> 400, PlotRangePadding -> None]

Notice that the streamlines are not completely parallel (there is a bit of deviation). To see why, let us look at the profiles of the u component of the velocity field at various x locations:

Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1},
	 ImageSize -> 300, PlotLegends -> Range[0, 6, 2],
	 PlotStyle -> {Black, Red, Blue, Green}]

This indicates that the velocities have a spatial dependence. For this particular problem, we should expect the flow to reach steady state, i.e. the flow should not vary with time. Let us run the simulation for an additional 20 time units and see the velocity profile:

WindTunnelIterate[state, 20];
	{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	Plot[Evaluate[{usol[#, y] & /@ Range[0, 6, 2]}], {y, -1, 1},
	 ImageSize -> 300, PlotLegends -> Range[0, 6, 2],
	 PlotStyle -> {Black, Red, Blue, Green}]

We see that the velocity profiles at the various x locations are almost the same as each other. This gives us an indication that the flow is indeed reaching steady state.

The Flow-in-a-Box Problem

Let us now look at the classic flow-in-a-box problem. Here’s the schematic of the domain and the boundary condition information:

Flow-in-a-box problem

The top wall moves with a horizontal velocity of 1 (length units/time units), while all the others are stationary, no-slip walls. The circles inside the box denote the kind of fluid behavior that might be expected. As the top wall moves, the wall drags the fluid below it, causing the fluid to rotate—that is the big circle in the schematic, and it represents a vortex. If there is sufficient strength in the main vortex, then we can expect it to start causing smaller, secondary vortices to form. Our hypothesis is that the strength of the vortex should be related to the Reynolds number. Let us see what happens by running the simulation at Reynolds number 100:

state = WindTunnelInitialize[{100, 1, 1},
	   Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 1}, t,
	   "CharacteristicLatticePoints" -> 60,
	   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
	     "Right" -> "NoSlip", "Bottom" -> "NoSlip",
	     "Top" -> Function[{x, y, t}, {1, 0}]}];

We now iterate for 50 time units:

WindTunnelIterate[state, 50];

Visualizing the result shows us that there is a primary vortex that forms near the middle, while a smaller, secondary vortex forms at the bottom right of the box:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1},
	 AspectRatio -> Automatic, StreamPoints -> Fine,
	 PlotRangePadding -> None, ImageSize -> 300]

If the Reynolds number is ramped up, then these secondary vortices become stronger and larger, and additional vortices start developing in the corners. Let us look at the case when the Reynolds number is 1,000:

state = WindTunnelInitialize[{1000, 1, 1},
	   Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 1}, t,
	   "CharacteristicLatticePoints" -> 60,
	   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
	     "Right" -> "NoSlip", "Bottom" -> "NoSlip",
	     "Top" -> Function[{x, y, t}, {1, 0}]}];

We will again iterate for 50 time units:

WindTunnelIterate[state, 50];

Let us visualize the result:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 1},
	 AspectRatio -> Automatic, StreamPoints -> Fine,
	 PlotRangePadding -> None, ImageSize -> 300]

Notice that the primary vortex has moved closer to the center; from the looks of it, it’s strong enough to be able to form secondary vortices at the bottom left and bottom right of the domain.

Let us now see what happens if we do the simulation on a “tall” box rather than a square one. The boundary conditions remain the same, but the domain changes in the y direction:

state = WindTunnelInitialize[{1000, 1, 1},
	   Function[{x, y}, {0, 0}], {x, 0, 1}, {y, 0, 2}, t,
	   "CharacteristicLatticePoints" -> 60,
	   "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
	     "Right" -> "NoSlip", "Bottom" -> "NoSlip",
	     "Top" -> Function[{x, y, t}, {1, 0}]}];

Run the simulation and use ProgressIndicator to track the progress. This simulation will take a few minutes:

ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 50}]
	AbsoluteTiming[WindTunnelIterate[state, 50]]

Visualize the streamlines:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	StreamPlot[{usol[x, y], vsol[x, y]}, {x, 0, 1}, {y, 0, 2},
	 AspectRatio -> Automatic, StreamPoints -> Fine,
	 PlotRangePadding -> None, ImageSize -> Medium]

In the tall-box scenario, a primary vortex is developed near the top wall, and that vortex in turn creates another vortex below it. If that second vortex is strong enough, it will create vortices at the bottom corners of the box.

We can already see the flexibility our wind tunnel is providing us. Let us now put an object inside the wind tunnel and observe the behavior of the flow. For this example, use a circular object:

A circular object in the wind tunnel

This is the same as flow in a channel (our first example), but with an object placed in the channel. Notice now that there are two length scales, d and H. The choice of the characteristic length, though arbitrary, must tie back to some aspect of the physics of the flow. In this example, if the size of the object were increased or decreased, then the flow pattern behind it would be expected to change. Therefore, the natural choice is to use d as the characteristic length.

Let us place the cylinder at (3,0) in the domain. Let the size of the cylinder be 1 length unit. Therefore, the characteristic scale will be 1. Let the domain size be (0, 15) × (–2, 2). The object is specified as a ParametricRegion:

Remove[state];
	state = WindTunnelInitialize[{200, 1, 1},
	  Function[{x, y}, {0, 0}], {x, 0, 15}, {y, -2, 2}, t,
	  "CharacteristicLatticePoints" -> 15,
	  "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2,
	      Sin[s]/2}, {{s, 0, 2 Pi}}]}]

It is a good idea to visualize the tunnel before starting the simulation, to make sure the object is in the correct position:

ListLinePlot[state["ObjectsInTunnel"],
	 PlotRange -> {{0, 14}, {-2, 2}}, AspectRatio -> Automatic,
	 Axes -> False, Frame -> True, ImageSize -> Medium,
	 PlotLabel ->
	  StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]

Let us simulate the flow for 10 time units:

WindTunnelIterate[state, 10];
	{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];

	Rasterize@
	 Show[LineIntegralConvolutionPlot[{{usol[x, y], vsol[x, y]}, {"noise",
	      300, 400}}, {x, 0, 15}, {y, -2, 2}, AspectRatio -> Automatic,
	   ImageSize -> Medium, PlotRangePadding -> None,
	   LineIntegralConvolutionScale -> 2, ColorFunction -> "RoseColors"],
	  ListLinePlot[state["ObjectsInTunnel"],
	   PlotStyle -> {{Thickness[0.005], Black}}]]

There are two things to notice here: the symmetric pair of vortices behind the cylinder and the flow inside the cylinder. A close-up reveals that there is some flow pattern inside the cylinder as well:

Show[StreamPlot[{usol[x, y], vsol[x, y]}, {x, 2.4, 3.6}, {y, -1/2,
	   1/2}, AspectRatio -> Automatic, ImageSize -> Medium,
	  PlotRangePadding -> None],
	 ListLinePlot[state["ObjectsInTunnel"],
	  PlotStyle -> {{Thickness[0.01], Blue}}]]

This behavior is because we are making use of the IBM. As mentioned earlier, the IBM computes a set of forces to be applied on the grid points such that the velocity at the points representing the surface is 0. It does not specify what needs to happen inside the cylinder. Therefore, since the flow is incompressible, a flow pattern exists inside the cylinder as well. The important thing is that the velocities at the boundaries of the object are 0 (no-slip).
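A quick sanity check (my addition, assuming the usol, vsol and state from above) is to interpolate the computed velocity field back onto the Lagrangian boundary points and confirm that the speeds there are small compared to the unit inlet velocity:

(* evaluate the velocity field at each boundary point of the first object *)
bdry = state["ObjectsInTunnel"][[1]];
Max[Norm /@ Transpose[{Apply[usol, bdry, 1], Apply[vsol, bdry, 1]}]]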

Let us now continue to iterate for 30 time units and see what happens to the pattern behind the cylinder. Sometimes, it can be helpful to look at another variable called vorticity to get a better understanding of what is happening:

WindTunnelIterate[state, 30];
	{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];

Set up the color scheme for the contours:

cc = N@Range[-3, 3, 4/100];
	cc = DeleteCases[cc, x_ /; -0.4 <= x <= 0.4];
	cname = "VisibleSpectrum";
	cdata = ColorData[cname];
	crange = ColorData[cname, "Range"];
	cMinMax = {Min[cc], Max[cc]};
	colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;

Visualize the vorticity:

Remove[vort];
	vort = D[usol[x, y], y] - D[vsol[x, y], x];
	Rasterize@Show[ContourPlot[vort, {x, 0, 15},
	   {y, -2, 2}, AspectRatio -> Automatic, ImageSize -> 500,
	   Contours -> cc, ContourShading -> None, ContourStyle -> colors,
	   PlotRange -> {{0, 15}, {-2, 2}, All}],
	  Graphics[Polygon[state["ObjectsInTunnel"]]]]

We now notice that the symmetric pattern has been destroyed and replaced by a “wavy” behavior, which the vorticity plot shows clearly. What we see here is called an instability in the wake of the cylinder. This instability continues to amplify, and eventually vortices start forming behind the cylinder. This phenomenon is called “vortex shedding.” There is a shear layer generated at the surface of the cylinder that gets carried downstream.

This vortex shedding is also dependent on the Reynolds number. For small enough numbers, we don’t get any shedding. However, at around 100–150, the shedding is observed. To properly observe this phenomenon, it would be good to see the time evolution of this flow. As a first step, set up the problem by defining the characteristic terms and the objects in the tunnel:

state = WindTunnelInitialize[{200, 1, 1},
	   Function[{x, y}, {0, 0}], {x, 0, 15}, {y, -2, 2}, t,
	   "CharacteristicLatticePoints" -> 15,
	   "ObjectsInTunnel" -> {ParametricRegion[{3 + Cos[s]/2,
	       Sin[s]/2}, {{s, 0, 2 Pi}}]}];

To produce a time evolution of the vorticity, we will extract the solution at each time unit and generate a series of plots:

cc = N@Range[-5, 5, 10/200];
	cc = DeleteCases[cc, x_ /; -0.5 <= x <= 0.5];
	cname = "VisibleSpectrum";
	cdata = ColorData[cname];
	crange = ColorData[cname, "Range"];
	cMinMax = {Min[cc], Max[cc]};
	colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;

	res = Table[
	   WindTunnelIterate[state, t];
	   {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	   vort = D[usol[x, y], y] - D[vsol[x, y], x];
	   plot =
	    Show[ContourPlot[vort, {x, 0, 15}, {y, -2, 2},
	      AspectRatio -> Automatic, ImageSize -> Medium, Contours -> cc,
	      ContourShading -> None, ContourStyle -> colors,
	      PlotRange -> {{0, 15}, {-2, 2}, All}],
	     Graphics[Point /@ state["ObjectsInTunnel"]]];
	   Rasterize[plot]
	   , {t, 0, 50, 1}];

Running the simulation clearly shows two vortices forming in the back of the cylinder, with the shear layer slowly getting perturbed; the perturbation then increases in amplitude before finally breaking into vortex shedding:

ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False]

Observing Disturbances Caused by a Moving Object

For our next example, we will exploit the immersed boundary treatment and “immerse” a circular tank inside our wind tunnel. The boundary of the tank will have 0-velocities. Inside this tank, we will immerse an elliptical object. This object is placed near the tank wall and follows the tank boundary in a circular path. The lattice Boltzmann method combined with the immersed boundary treatment handles moving objects with little extra effort. The objective is to study what kind of disturbances develop when this object moves through a still fluid.

Set up the problem by defining the characteristic terms. In this case, the simulation will be performed at a Reynolds number of 400. The characteristic length and velocity are specified as unity. There are two objects in the tunnel. The first object is the large circular tank that is held stationary; the second is the elliptical object that will be moving inside this tank:

Remove[state];
	state = WindTunnelInitialize[{400, 1, 1},
	  Function[{x, y}, {0, 0}], {x, -2.2, 2.2}, {y, -2.2, 2.2}, t,
	  "CharacteristicLatticePoints" -> 25,
	  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
	    "Right" -> "NoSlip", "Top" -> "NoSlip", "Bottom" -> "NoSlip"},
	  "ObjectsInTunnel" -> {{ParametricRegion[{1.3 + 0.2*Sin[s],
	       0.5*Cos[s]}, {{s, 0, 2 Pi}}], Function[{xb, yb, t}, {-yb, xb}]},
	    {ParametricRegion[{2*Sin[s], 2*Cos[s]}, {{s, 0, 2 Pi}}]}}]

As always, it is a good idea to check the geometry of the underlying problem. We can do that by simply extracting the discretized object when doing the initialization; we can see that everything is where it is supposed to be:

Graphics[Map[Line, state["ObjectsInTunnel"]], Frame -> True,
	 ImageSize -> Small]

As we did before, we will be looking at the vorticity contours of the flow. Let us first define the color scheme and the levels of contours that will be plotted:

cc = N@Range[-7, 7, 14/100];
	cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
	cname = "VisibleSpectrum";
	cdata = ColorData[cname];
	crange = ColorData[cname, "Range"];
	cMinMax = {Min[cc], Max[cc]};
	colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;

The simulation is now run for 60 time units:

oreg = RegionPlot[x^2 + y^2 >= 2^2, {x, -2.2, 2.2}, {y, -2.2, 2.2},
	   PlotStyle -> Black];
	AbsoluteTiming[res = Table[
	    WindTunnelIterate[state, tt];
	    {usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	    vort = D[usol[x, y], y] - D[vsol[x, y], x];
	    Rasterize@
	     Show[ContourPlot[vort, {x, -2, 2}, {y, -2, 2},
	       AspectRatio -> Automatic, ImageSize -> 300, Contours -> cc,
	       ContourShading -> None, ContourStyle -> colors,
	       PlotRange -> {{-2, 2}, {-2, 2}, All}],
	      Graphics[Polygon[First[state["ObjectsInTunnel"]]]], oreg], {tt,
	     0, 60, 1/2}];]

Running the time evolution of the fluid disturbance shows that a very beautiful geometric pattern is formed within the tank initially before settling down to a more uniform circular disturbance:

ListAnimate[res, DefaultDuration -> 10, AnimationRunning -> False,
	 ImageSize -> Automatic]

Flow in a Pipe with Bends and Obstacles

For the sake of curiosity (and fun), what kind of flow pattern would we expect for the following geometry?

Flow in a pipe

Fluid enters the pipe from the right end, moves up the pipe and then gets discharged from the left end. What would be the effect of that stopper at the right end? How will it impact the discharge?

This is surprisingly easy to figure out with our current setup. Again, we just immerse our pipe and the obstacle within it into our wind tunnel. The left, right and top boundaries of the wind tunnel are given a 0-velocity condition. The bottom boundary is given an outflow condition on –1 ≤ x ≤ –0.7, a 0-velocity condition on –0.7 ≤ x ≤ 0.7 and a parabolic velocity profile on 0.7 ≤ x ≤ 1:

Remove[state];
	inletVel = Fit[{{7/10, 0}, {17/20, 1}, {1, 0}}, {1, x, x^2}, x];
	state = WindTunnelInitialize[{500, 0.3, 1},
	  Function[{x, y}, {0, 0}], {x, -1.1, 1.1}, {y, 0, 1.1}, t,
	  "CharacteristicLatticePoints" -> 20,
	  "TunnelBoundaryConditions" -> {"Left" -> "NoSlip",
	    "Right" -> "NoSlip", "Top" -> "NoSlip",
	    "Bottom" ->
	     Function @@
	      List[{x, y, t},
	       If @@ List[0.7 <= x <= 1., {0, inletVel},
	         If[-1 <= x <= -0.7, "Outflow", {0, 0}]]]},
	  "ObjectsInTunnel" -> {ImplicitRegion[
	     0.7 <= (x^4 + y^4)^(1/4) <= 1, {{x, -1, 1}, {y, -0.2, 1}}],
	    ParametricRegion[{0.22 + t, t - 0.2}, {{t, 0.55, 0.7}}]}]

Let us run it for 40 time units:

ProgressIndicator[Dynamic[state["CurrentTime"]], {0, 40}]
	AbsoluteTiming[WindTunnelIterate[state, 40]]

Let us plot the vorticity:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	vort = D[usol[x, y], y] - D[vsol[x, y], x];
	cc = N@Range[-20, 20, 40/50];
	cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
	cname = "VisibleSpectrum";
	cdata = ColorData[cname];
	crange = ColorData[cname, "Range"];
	cMinMax = {Min[cc], Max[cc]};
	colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
	Rasterize@
	 Show[ContourPlot[vort, {x, -1, 1}, {y, 0, 1},
	   AspectRatio -> Automatic, ImageSize -> Medium, Contours -> cc,
	   ContourShading -> None, ContourStyle -> colors,
	   PlotRange -> {{-1, 1}, {0, 1}, All},
	   RegionFunction -> Function[{x, y}, 0.7 <= (x^4 + y^4)^(1/4) <= 1]]
	  , Graphics[Point /@ state["ObjectsInTunnel"]]]

We see that the obstacle/stopper introduces a vortex shedding, which travels down the pipe. Let us look at the velocities at y = 0:

Plot[vsol[x, 0], {x, -1, 1}, PlotRange -> {{-1, 1}, {-1, 1}},
	 ImageSize -> Medium]

If we compare the velocity profile between the outlet (at the left) and inlet (at the right), we see that the outlet velocity is almost half of the inlet. This gives us compelling evidence that the stopper has caused a reduction in fluid discharge from the left end of the pipe, which is to be expected.

Simulating the Flow over an Airfoil

As a final example, let us look at the flow around an airfoil. An airfoil basically represents the cross-section of an airplane wing, and is the fundamental thing that actually allows an airplane to lift off the ground. There are many types of airfoils, but we will focus on a simple one where the airfoil is described by the parametric equation (x(t), y(t)) = (t², 0.2(t − t³ + (t² − t⁴)/b)/a), −1 ≤ t ≤ 1. The parameter a controls how thick the airfoil is, and the parameter b controls the curvature of the airfoil:

Clear[mat, a, b, t, AOA];
	Manipulate[
	 mat = {{Cos[AOA Degree], -Sin[AOA Degree]}, {Sin[AOA Degree],
	    Cos[AOA Degree]}};
	 ParametricPlot[
	  mat.{t^2, 0.2 (t - t^3 + (t^2 - t^4)/b)/a}, {t, -1, 1},
	  AspectRatio -> Automatic, ImageSize -> Medium,
	  PlotRange -> {{0, 1}, {-0.2, 0.5}}], {{a, 1}, 0.1, 10}, {{b, 0.9},
	  0.3, 10}, {{AOA, 0, "Angle of Attack"}, -20, 20}]

In order for the aircraft to get “lift,” i.e. be able to get off the ground, the top surface of the airfoil should have a pressure distribution that is lower than the bottom surface. This pressure difference causes the wing to lift upward (along with anything attached to it). This pressure difference is achieved by having wind blow over its surface at significantly high speeds. A second consideration is that the wing generally needs to be tilted or have an “angle of attack” to it. By doing this, we ensure greater lift. We will give the airfoil a –10° angle of attack. The simulation will be run for a Reynolds number of 1,000. Now, I should point out that a Reynolds number of 1,000 is a rather small value. A typical Reynolds number for small aircraft is around 1 million. A full-scale simulation is just not possible on a laptop because of the large grid size. However, even at 1,000, we should be able to get a good understanding of the underlying dynamics. For this example, a uniform flow field comes in from the left. The top and bottom tunnel boundaries are set to be periodic, and the right boundary is set to an outflow. The characteristic length here will be the thickness of the airfoil:

state = WindTunnelInitialize[{1000, 0.2, 1},
	  Function[{x, y}, {0, 0}], {x, -2, 6}, {y, -1., 1.}, t,
	  "CharacteristicLatticePoints" -> 20,
	  "CharacteristicLatticeVelocity" -> 0.05,
	  "TunnelBoundaryConditions" -> {"Left" ->
	     Function[{x, y, t}, {1, 0}], "Right" -> "Outflow",
	    "Top" -> "Periodic"},
	  "ObjectsInTunnel" -> {ParametricRegion[{{Cos[-10 Degree], -Sin[-10 \
	Degree]}, {Sin[-10 Degree], Cos[-10 Degree]}}.{t^2,
	       0.2 (t - t^3 + (t^2 - t^4)/0.9)/1}, {{t, -1, 1}}]}]

Before starting the simulation, extract the discretized object and check that it is in the appropriate location within the wind tunnel:

ListLinePlot[state["ObjectsInTunnel"],
	 PlotRange -> Evaluate[state["Ranges"]], AspectRatio -> Automatic,
	 Axes -> False, Frame -> True, ImageSize -> 400,
	 PlotLabel ->
	  StringForm["GridPoints: ``", Reverse@state["GridPoints"]]]

Notice the large number of grid points. This is because we are allowing 20 lattice points to resolve the thin airfoil. We now run the simulation for 10 time units. This takes a while to finish because: (a) the resolution (i.e. the number of grid points needed for running this simulation) is quite large (800×200); and (b) to complete the simulation, 20,000 iterations must be performed:

10/state["TimeStep"]
&#10005
10/state["TimeStep"]

Start the iteration process:

AbsoluteTiming[WindTunnelIterate[state, 10]]

Let us first look at the vorticity plot:

{usol, vsol} = {"U", "V"} /. state[state["CurrentTime"]];
	vort = D[usol[x, y], y] - D[vsol[x, y], x];
	cc = N@Range[-15, 15, 30/60];
	cc = DeleteCases[cc, x_ /; -0.1 <= x <= 0.1];
	cname = "VisibleSpectrum";
	cdata = ColorData[cname];
	crange = ColorData[cname, "Range"];
	cMinMax = {Min[cc], Max[cc]};
	colors = cdata[Rescale[#, cMinMax, crange]] & /@ cc;
	Show[ContourPlot[vort, {x, -0.5, 5}, {y, -1, 1},
	  AspectRatio -> Automatic, ImageSize -> 500, Contours -> cc,
	  ContourShading -> None, ContourStyle -> colors,
	  PlotRange -> {{-0.5, 5}, {-1, 0.5}, All}]
	 , Graphics[{FaceForm[White], EdgeForm[Black],
	   Polygon[state["ObjectsInTunnel"][[1]]]}]]

Just as in the case of the bluff body, we are seeing vortex shedding. For the case of the airfoil, this is not really a desirable property. We ideally want the flow to hug the surface. When the flow separates (as you see on the top surface of the airfoil), the pressure drop is not achieved properly and the airfoil will be unable to generate lift.

Let us now look at the pressure. Rather than plotting the pressure, we will plot a non-dimensional parameter called the pressure coefficient, defined by Cp = 2(p – p∞)/(ρLBM U²LBM), where p∞ is the pressure far upstream. We are interested in looking at the pressure at the object’s surface:

PressureCoefficient[x_?NumericQ,
	  y_?NumericQ] := (psol[x, y] - psol[-2, 0])/(0.5*
	    state["InternalVelocity"]^2)
objs = state["ObjectsInTunnel"][[1]];
psol = "P" /. state[state["CurrentTime"]];
pp = Apply[psol, objs, 1];
	pp = (pp - psol[-2, 0])/(0.5*state["InternalVelocity"]^2);
	ListPlot[Transpose[{objs[[All, 1]], pp}], PlotRange -> All,
	 Axes -> False, Frame -> True,
	 FrameLabel -> {"x \[Rule]", "\!\(\*SubscriptBox[\(C\), \(p\)]\)"},
	 FrameStyle -> Directive[Black, 14], ImageSize -> Medium]

You will notice that there are two lines here. The lower line represents the pressure on the top surface, while the top line represents the pressure on the bottom surface. It is clear that despite some separation of the flow from the airfoil, we are getting a pressure difference. We can also plot the pressure contours and visualize them near the airfoil:

Show[Quiet@
	  ContourPlot[
	   PressureCoefficient[x, y], {x, -0.5, 1.5}, {y, -0.4, 0.4},
	   AspectRatio -> Automatic, PlotRangePadding -> None,
	   ColorFunction -> "TemperatureMap", Contours -> 40,
	   PlotLegends -> Automatic, PlotRange -> All, ImageSize -> Medium],
	 Graphics[{FaceForm[White], EdgeForm[Black],
	   Polygon[state["ObjectsInTunnel"][[1]]]}]]

If you look carefully at the color scheme, you will indeed see that the pressure on the top surface is less than on the bottom surface. So perhaps there is hope for this airfoil. The fluid-dynamic property that we have just explored is called the Bernoulli principle, which has applications in aviation (as we have seen here) and in fields such as automotive engineering.

This is just the start—there are many more examples you can try out! What we have discussed here is a good place to begin exploring this alternative approach to studying fluid dynamics problems and their implementation in Mathematica. The LBM combined with the IBM is a good tool for anyone interested in studying and analyzing fluid flows. With the help of Mathematica’s built-in functions, putting together the numerical wind tunnel is quite straightforward. The WindTunnel2DLBM package has helped me explore many fascinating concepts in the field of fluid dynamics (and make stunning visualizations). I hope you too will get inspired and dive into the exploration of fluid-flow phenomena.

Get full access to the latest Wolfram Language functionality with a Mathematica 12 or Wolfram|One trial.
11 Jun 20:58

Synopsis: Making the Perfect Crêpe

Cooking a flat, hole-free crêpe—a thin pancake popular in France and other European countries—is all in how you roll your wrist, according to predictions from a new model.


[Physics] Published Tue Jun 11, 2019

29 Sep 01:43

The Sun's Spectrum with its Missing Colors

It is still not known why the Sun's light is missing some colors.


04 Jan 19:47

Meanwhile in Japan: An Exhibit of Chopstick Sleeve Art Left Behind at Restaurants

by twistedsifter


In Japan it is not customary to tip. So when former waiter Yuki Tatsumi would clear the tables, he noticed that some patrons would leave origami made from the paper chopstick sleeves. He took this as a subtle and discreet way of showing appreciation, so he began to collect what he called Japanese Tip.

What began as a personal collection/hobby in 2012, exploded when he started to reach out to restaurants and eaters across Japan. Soon he had thousands and now boasts over 13,000 in his collection, which he recently exhibited in Tokyo.

Although the exhibit is now closed, you can still keep up with the project on the official website and Facebook page.

[via Colossal]

 

JAPANESE TIP
Website | Facebook

20 Dec 01:21

Tim Lincecum Planning Showcase For MLB Teams

by Steve Adams

After sitting out the 2017 season, right-hander Tim Lincecum is working out with the trainers at Driveline Baseball and will showcase for interested teams “in the near future,” per an announcement from Driveline (on Twitter). Intrigue around Lincecum picked up earlier today when fellow Driveline client Adam Ottavino posted a picture of Lincecum on Instagram.

Lincecum looks to be in excellent physical condition, though it’s certainly best to temper expectations regarding his ability to contribute on the field. Lincecum didn’t pitch in the Majors, minors or in any other professional capacity in 2017, and his 2016 run with the Angels following 2015 hip surgery was an unmitigated disaster.

In Lincecum’s last run through MLB, he logged a 9.16 ERA with 7.5 K/9, 5.4 BB/9 and a staggering 2.58 HR/9 through 38 1/3 innings with the Halos. His average fastball in that time was just 87.7 mph — nearly seven full miles per hour off the 94.2 mph that he averaged as a 23-year-old rookie one decade ago.

Yahoo’s Jeff Passan tweets that Lincecum is “throwing hard again,” though he didn’t put a specific number on the two-time NL Cy Young winner’s velocity at present. Restored velocity, of course, could be a huge boon for Lincecum, who’ll turn 34 next June. Lincecum established himself as one of the game’s most dominant pitchers quickly after debuting and held that status from 2008-11 before beginning to decline in 2012. Perhaps not surprisingly, the first significant drop-off in performance for Lincecum came when his average fastball plummeted from 92.3 mph in 2011 to 90.4 mph in 2012.

Lincecum figures to be of some degree of interest to most clubs, as there’d be little reason not to at least have a scout or two watch a workout to see if there are any indications of a potential return to form. It’d be unrealistic to hope for his 2008-09 dominance (2.55 ERA, 10.5 K/9, back-to-back Cy Young Awards), but a rejuvenated Lincecum could conceivably contribute to a big league rotation all the same.

Presumably, he’ll field predominantly minor league offers, though it’s possible that he drums up enough interest to secure a 40-man roster spot with a low base salary and a healthy pile of incentives based on appearances and/or innings pitched. By the time Opening Day rolls around, it will have been six full seasons since “The Freak” was an above-average contributor in a Major League rotation, though recent resurgences from the likes of Scott Kazmir and Rich Hill (among others) serve as reminders that it’s virtually never too late to rule out a comeback — especially for a player that won’t turn 34 until next summer.

08 Dec 01:25

Picture of the Day: Silhouettes and Sunsets

by twistedsifter


A South African cheetah (Acinonyx jubatus jubatus) is silhouetted against a fiery sunset, in the Okavango, in Botswana. This beautiful photo was a finalist in the Wikimedia Commons Picture of the Year 2014.

07 Dec 21:32

"Footprints In The Mall" by Roy Moore

by Ted McCagg

Footprints

05 Dec 02:42

The shape of the Milky Way: Mapping the far side of the galaxy

by Phil Plait

What does our galaxy look like if you could see it from the outside?

This is a pretty interesting question. For a long time, we didn't even know we lived in a galaxy that was one among hundreds of billions. Back then — up until about a century ago — it wasn't clear if everything we saw in the sky was in one giant clump, or if the little fuzzy patches we saw were actually separate galaxies (what used to be called "island Universes," which is both poetic and apt).

Then work led by Edwin Hubble (after whom the telescope is named) and done by a team of astronomers determined that the little fuzzies were indeed very far away, and therefore we must live in a separate, smaller structure as well. We already knew that the Milky Way — the name of our particular galaxy — was flattened, like a disk, and that we were inside it, because we see it as a broad band of light splayed across the sky. If the galaxy were spherical, we'd see stars equally in every direction. Instead, that band means the galaxy is flat.

 

We see other disk galaxies in the sky, and they show spiral structure. Over time, using a variety of methods, we learned that the Milky Way too is a flat disk with spiral arms, and also determined it has a central bulge and a long, Tic-Tac-shaped "bar" in the middle. The Sun with our solar system is located about 26-27,000 light-years from the galactic center.

But what is the exact shape of the galaxy? How many spiral arms does it have? Are they tightly wound, or do they have a more lackadaisical winding to them?

These questions aren't so easy to answer. Sophisticated techniques have allowed us to map the part of the galaxy around us, but the far side is harder to observe. For one thing, there's a lot of junk in the galaxy that obscures our view. Mostly gas and dust, it absorbs light, and the closer you look toward the center of the galaxy the thicker this stuff is.

Also, because the other side of the galaxy is far away, the sources of light (stars, gas clouds, and such) you want to observe appear closer together — it's like standing next to a forest and being able to easily separate individual trees near you, but seeing them blend together farther away. We call this kind of observation "confusion limited."

However, a team of astronomers have managed to do something rather incredible: They've pinned down the location of a star-forming gas cloud on the far side of the Milky Way, and have used it to create a better map of one of the spiral arms.

The source is a young star that's blasting out microwave radiation in what's called a maser (it's the same physics that makes a laser, but in a different "color" of light). They used a series of radio telescopes across the Earth that are linked together, creating what is essentially a telescope thousands of kilometers across. This array has phenomenal resolution: They were able to track the motion of the maser as the star physically moved in its orbit around the galaxy! Mind you, it takes more than 200 million years to circle the galaxy just once, so this motion is incredibly small.
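To put rough numbers on that resolution: the smallest angle a telescope (or linked array) can resolve scales as the observing wavelength divided by the aperture, which for an array is the baseline between dishes. Here's a minimal back-of-the-envelope sketch in Python; the 22 GHz water-maser frequency and the ~10,000 km baseline are my assumptions, since the article doesn't specify either:

    import math

    C = 299_792_458.0                           # speed of light, m/s
    RAD_TO_UAS = math.degrees(1) * 3600 * 1e6   # radians -> microarcseconds

    freq_hz = 22e9        # assumed: typical water-maser frequency
    baseline_m = 1.0e7    # assumed: ~Earth-scale VLBI baseline (10,000 km)

    wavelength_m = C / freq_hz                  # ~1.4 cm
    theta_rad = wavelength_m / baseline_m       # diffraction-limited resolution
    print(f"resolution ~ {theta_rad * RAD_TO_UAS:.0f} microarcseconds")
    # prints ~280 microarcseconds; fitting the position of a bright,
    # point-like maser can do considerably better than this raw figure.

That raw figure is the beam size; the position of a bright point source within the beam can be pinned down to a small fraction of it, which is what makes the measurement described below possible.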

As the Earth orbits the Sun, we see distant objects at different angles. This allowed astronomers to determine the distance to an object on the far side of the galaxy. This parallax angle is exceedingly small, but measurable. Credit: Bill Saxton, NRAO/AUI/NSF; Robert Hurt, NASA

But once they were able to measure that, they could then see an even smaller effect: the apparent motion of the object as the Earth orbits the Sun. As the Earth moves from one side of its orbit to the other, distant objects appear to shift back and forth. We call this effect parallax (if you want details, I talk about how this works in Crash Course Astronomy: Distance), and it can be used to determine an object's distance: If it shifts a lot it must be close, and if it shifts only a little it's farther out. If you know how much the Earth moves, and measure the angle the object appears to move, you can calculate its distance.

So they did! The distance they got for this maser was 66,500 light-years: Clear across the galaxy on the other side (and then some), making this the farthest object in the galaxy ever to have its distance measured this way.
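The arithmetic behind that distance is simple enough to check yourself. With a 1 AU baseline, the distance in parsecs is just the reciprocal of the parallax angle in arcseconds, so running the quoted 66,500 light-years backwards shows how tiny the measured angle must be. A quick Python sketch (the article doesn't quote the angle itself, so this is an inferred approximation):

    LY_PER_PARSEC = 3.26156               # light-years per parsec

    distance_ly = 66_500.0                # distance quoted for the maser
    distance_pc = distance_ly / LY_PER_PARSEC
    parallax_arcsec = 1.0 / distance_pc   # parallax (arcsec) = 1 / distance (pc)

    print(f"distance ~ {distance_pc:,.0f} parsecs")
    print(f"parallax ~ {parallax_arcsec * 1e6:.0f} microarcseconds")
    # ~20,400 parsecs and ~49 microarcseconds: roughly the apparent
    # size of a 24 mm coin seen from 100,000 km away.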

The most current map of the Milky Way is shown in an artist’s representation. The Sun is directly below the galactic center, near the Orion Spur. The Scutum-Centaurus arm sweeps out to the right and above, going behind the center to the far side. The maser observed is almost directly opposite the Sun from the center in the S-C arm, 65,000 light-years away. Credit: NASA/JPL-Caltech/R. Hurt (SSC/Caltech)

What makes this so important is that this means it must be part of a spiral arm we call the Scutum-Centaurus arm (named after the constellations we see it in). Quite a few objects on this side of the galaxy in that arm have been measured, but nothing on the far side. This measurement of the far side maser's distance means they've been able to nail down the position of the spiral arm on the other side of the galaxy.


A spiral arm has an opening angle, called the pitch angle, which is how much the arm deviates from a circle. The red arrow indicates the pitch angle of this particular spiral; if it were 0° the arm would overlap the grey circle, and the larger the angle the wider flung the arm is. Credit: Morn the Gorn / Wikipedia

That yielded something of a surprise, actually. There's an angle that helps define how open or how tightly wound a spiral arm is, called the pitch angle: if it's 0° the arm forms a circle, and the bigger the pitch angle, the more open the arm is. Using nearby objects, the Scutum-Centaurus arm appears to have a pitch angle of 14°. But if you also include the maser to anchor the arm on the far side of the galactic center, the pitch angle is more like 22°. That's a significant change.

The Milky Way's arms are open wider than we first thought.

Or if you prefer: We're less tightly wound than we used to think.
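To see what that change in pitch angle actually does to an arm, a common idealization (my assumption here, not a model the article specifies) is to treat the arm as a logarithmic spiral, where the radius grows as r = r0 · e^(θ · tan(pitch)). A short Python comparison of how much the radius grows over one full turn at the two pitch angles:

    import math

    def growth_per_turn(pitch_deg):
        """Factor by which a logarithmic spiral's radius grows in one full turn."""
        return math.exp(2 * math.pi * math.tan(math.radians(pitch_deg)))

    for pitch_deg in (14, 22):
        print(f"pitch {pitch_deg} deg: radius grows "
              f"~{growth_per_turn(pitch_deg):.1f}x per turn")
    # pitch 14 deg: ~4.8x per turn
    # pitch 22 deg: ~12.7x per turn

Under that toy model, bumping the pitch from 14° to 22° more than doubles how quickly the arm flings outward with each turn, which is why the revised map looks noticeably more open.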

The point here is that even though we live inside our galaxy, there's still a lot we're learning about it. It's like being in a smoke-filled room and finally being able to see things on the other side against the far wall. It helps you define the shape and structure of where you are.

Our scientific picture of the Universe has changed a lot over time, too. The invention of the telescope showed us it was a lot bigger and more populated than we knew, and as time went on it got a whole lot bigger still.

And even now, a century after we finally understood the nature of our own galaxy, this local town of ours still has some lovely surprises for us. It’s like wandering around your block and finding a creek running through it you never noticed before. What wonders lie in it, and what treasures still remain unseen?

That's why we keep looking. We live in a beautiful neighborhood, and it's fun to discover more places to visit.

04 Dec 04:01

Elton John Working on the Harmony for Tiny Dancer in 1970

by twistedsifter

 

In this fascinating behind-the-scenes clip from 1970, we see Elton John working on the harmony of a new song called ‘Tiny Dancer‘.

Released on 7 February 1972, the song was written by Bernie Taupin and Elton John and produced by Gus Dudgeon.

 


 

04 Dec 03:58

‘Berezka’, the Hypnotizing Russian Folk Dance Where the Women Seem to Float

by twistedsifter

 

By combining small and very quick steps with a floor length gown, these talented dancers appear to float across the stage in a hypnotizing Russian Folk Dance known as ‘Berezka’.

 


 

27 Nov 01:53

Millions, Billions, Trillions: How to Make Sense of Numbers in the News

Anyone who can understand tens, hundreds and thousands can develop habits and skills to accurately navigate millions, billions and trillions. Stay with me, especially if you’re math-averse

-- Read more on ScientificAmerican.com
09 Nov 19:47

The Prague Astronomical Clock

In the center of Prague there's a clock the size of a building.


13 Oct 01:41

A-ha Did an Unplugged Version of ‘Take On Me’ and It Sounds Like a Completely Different Song

by twistedsifter

 

At a recent MTV Unplugged performance at Giske Harbour Hall in Norway, rock band a-ha performed an acoustic rendition of their hit song, ‘Take On Me’.

The clip was taken from the forthcoming album/DVD, a-ha: MTV Unplugged – Summer Solstice, which features 17 hits, 2 cover versions and 2 brand new songs. The performance was recorded live on June 22 and 23, 2017.

 
