Shared posts

13 Apr 01:12

AI-GPT: a game changer?

by michael roberts

ChatGPT is being heralded as a revolution in ‘artificial intelligence’ (AI) and has been taking the media and tech world by storm since launching in late 2022.  

According to OpenAI, ChatGPT is “an artificial intelligence trained to assist with a variety of tasks.” More specifically, it is a large language model (LLM) designed to produce human-like text and converse with people, hence the “Chat” in ChatGPT.

GPT stands for Generative Pre-trained Transformer.  The GPT models are pre-trained on vast amounts of text by their developers; they do not then keep learning on their own, but generate responses from the patterns learned in training, delivering that ‘knowledge’ to humans in a conversational way (chat).

Practically, this means you present the model with a query or request by entering it into a text box. The AI then processes this request and responds based on the information that it has available. It can do many tasks, from holding a conversation to writing an entire exam paper; from making a brand logo to composing music and more.  So much more than a simple Google-type search engine or Wikipedia, it is claimed.

Human developers are working to raise the ‘intelligence’ of GPTs.  The free version of ChatGPT runs on GPT-3.5, with GPT-4 released in March 2023.  And it is rumoured that GPT-5 could achieve ‘artificial general intelligence’ (AGI). This means it could pass the Turing test – a test of whether a computer can communicate in a manner that is indistinguishable from a human.

Will LLMs be a game changer for capitalism in this decade?  Will these self-learning machines be able to increase the productivity of labour at an unprecedented rate and so take the major economies out of their current ‘long depression’ of low real GDP, investment and income growth; and then enable the world to take new strides out of poverty?  This is the claim by some of the ‘techno-optimists’ that occupy the media. 

Let’s consider the answers to those questions.  

First, just how good and accurate are the current versions of ChatGPT?  Well, not very, just yet.  There are plenty of “facts” about the world which humans disagree on. Regular search lets you compare those versions and consider their sources. A language model might instead attempt to calculate some kind of average of every opinion it’s been trained on—which is sometimes what you want, but often is not. ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.  Let me give you some examples.

I asked ChatGPT 3.5: who is Michael Roberts, Marxist economist?  This was the reply.

This is mostly right but it is also wrong in parts (I won’t say which).

Then I asked it to review my book, The Long Depression.  This is what it said:

This gives a very ‘general’ review or synopsis of my book, but leaves out the kernel of the book’s thesis: the role of profitability in crises under capitalism.  Why, I don’t know.

So I asked this question about Marx’s law of profitability:

Again, this is broadly right – but just broadly.  The answer does not really take you very far in understanding the law.  Indeed, it is no better than Wikipedia.  Of course, you can dig (prompt) further to get more detailed answers. But there seems to be some way to go in replacing human research and analysis.

Then there is the question of the productivity of labour and jobs. Goldman Sachs economists reckon that if the technology lived up to its promise, it would bring “significant disruption” to the labour market, exposing the equivalent of 300m full-time workers across the major economies to automation of their jobs. Lawyers and administrative staff would be among those at greatest risk of becoming redundant (and probably economists).  They calculate that roughly two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed in thousands of occupations. 

Most people would see less than half of their workload automated and would probably continue in their jobs, with some of their time freed up for more productive activities. In the US, this would apply to 63% of the workforce, they calculated. A further 30% working in physical or outdoor jobs would be unaffected, although their work might be susceptible to other forms of automation.

The GS economists concluded: “Our findings reveal that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.”

With access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to 47-56% of all tasks.  About 7% of US workers are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement. At a global level, since manual jobs are a bigger share of employment in the developing world, GS estimates about a fifth of work could be done by AI — or about 300m full-time jobs across big economies. 
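The kind of employment-weighted tally GS describes can be sketched with a toy calculation. All the occupation names, employment shares and exposure fractions below are invented for illustration; they are not GS's actual data or methodology:

```python
# Toy illustration of a task-exposure tally like the one GS describes.
# Each occupation has an employment share and an estimated fraction of
# its tasks exposed to LLM automation. All numbers are hypothetical.
occupations = [
    # (name, employment share, share of tasks exposed)
    ("legal assistant",   0.05, 0.60),
    ("admin clerk",       0.15, 0.55),
    ("software engineer", 0.10, 0.40),
    ("teacher",           0.15, 0.25),
    ("nurse",             0.15, 0.10),
    ("construction",      0.20, 0.05),
    ("driver",            0.20, 0.05),
]

def share_of_workers(min_exposure):
    """Employment-weighted share of workers whose task exposure
    is at least `min_exposure`."""
    return sum(w for _, w, e in occupations if e >= min_exposure)

print(share_of_workers(0.10))  # share with >=10% of tasks exposed
print(share_of_workers(0.50))  # share with >=50% of tasks exposed
```

Headline figures like "80% of workers with at least 10% of tasks affected" and "19% with at least 50%" are thresholded sums of this kind, so they depend heavily on how task exposure is scored per occupation.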

These job loss forecasts are nothing new.   In previous posts, I have outlined several forecasts on the number of jobs that will be lost to robots and AI over the next decade or more.  It appears to be huge; and not just in manual work in factories but also in so-called white-collar work.

It is in the essence of capitalist accumulation that the workers will continually face the loss of their work from capitalist investment in machines.  The replacement of human labour by machines started at the beginning of the British Industrial Revolution in the textile industry, and automation played a major role in American industrialization during the 19th century. The rapid mechanization of agriculture starting in the middle of the 19th century is another example of automation.

As Engels noted in his book, The Condition of the Working Class in England (1844), mechanisation not only shed jobs; often it also created new jobs in new sectors – see my book on Engels’ economics, pp 54-57.  But as Marx identified in the 1850s: “The real facts, which are travestied by the optimism of the economists, are these: the workers, when driven out of the workshop by the machinery, are thrown onto the labour-market. Their presence in the labour-market increases the number of labour-powers which are at the disposal of capitalist exploitation…the effect of machinery, which has been represented as a compensation for the working class, is, on the contrary, a most frightful scourge. …. As soon as machinery has set free a part of the workers employed in a given branch of industry, the reserve men are also diverted into new channels of employment and become absorbed in other branches; meanwhile the original victims, during the period of transition, for the most part starve and perish.” Grundrisse. The implication here is that automation means increased precarious jobs and rising inequality.

Up to now, mechanisation has still required human labour to start and maintain it. But are we now moving towards the takeover of all tasks, and especially those requiring complexity and ideas with LLMs? And will this mean a dramatic rise in the productivity of labour so that capitalism will have a new lease of life? 

If LLMs can replace human labour and thus raise the rate of surplus value dramatically, but without a sharp rise in investment costs of physical machinery (what Marx called a rising organic composition of capital), then perhaps the average profitability of capital will jump back from its current lows.

Goldman Sachs claims that these “generative” AI systems such as ChatGPT could spark a productivity boom that would eventually raise annual global GDP by 7% over a decade.  If corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, US AI investment alone could approach 1% of US GDP by 2030.

I won’t go into how GS calculates these outcomes, because the results are conjectures.  But even if we accept the results, are they such an exponential leap?  According to the latest forecasts by the World Bank, global growth is set to decline by roughly a third from the rate that prevailed in the first decade of this century—to just 2.2% a year.  And the IMF puts the average growth rate at 3% a year for the rest of this decade. 

If we add in the GS forecast of the impact of LLMs, we get maybe 3.0-3.5% a year for global real GDP growth – and this does not account for population growth.  In other words, the likely impact would be no better than the average seen since the 1990s.  That recalls economist Robert Solow’s famous 1987 quip that you can see the computer age “everywhere but in the productivity statistics.”
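The arithmetic behind that range can be checked with a rough compounding approximation (a back-of-the-envelope sketch; GS's own methodology is more involved):

```python
# GS: LLMs could eventually raise global GDP by 7% over a decade.
# Spread over ten years, that compounds to an annualized boost of:
boost = 1.07 ** (1 / 10) - 1  # roughly 0.68% per year

# Adding that to the World Bank (2.2%) and IMF (3.0%) baseline
# forecasts gives the combined growth range cited above.
world_bank = 0.022
imf = 0.030
print(f"annualized LLM boost: {boost:.2%}")
print(f"combined range: {world_bank + boost:.1%} to {imf + boost:.1%}")
```

So even taking the GS estimate at face value, the boost is under one percentage point a year, which is why the combined figure stays around 3.0-3.5%.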

US economist Daron Acemoglu adds that not all automation technologies actually raise the productivity of labour.  That’s because companies mainly introduce automation in areas that may boost profitability, like marketing, accounting or fossil fuel technology, without raising productivity for the economy as a whole or meeting social needs. Big Tech has a particular approach to business and technology that is centred on the use of algorithms for replacing humans. It is no coincidence that companies such as Google employ fewer than one-tenth of the number of workers that large businesses, such as General Motors, used to employ in the past. This is a consequence of Big Tech’s business model, which is based not on creating jobs but on automating them.

That’s the business model for AI under capitalism.  But under cooperative commonly owned automated means of production, there are many applications of AI that instead could augment human capabilities and create new tasks in education, health care, and even in manufacturing. Acemoglu suggested that “rather than using AI for automated grading, homework help, and increasingly for substitution of algorithms for teachers, we can invest in using AI for developing more individualized, student-centric teaching methods that are calibrated to the specific strengths and weaknesses of different groups of pupils. Such technologies would lead to the employment of more teachers, as well as increasing the demand for new teacher skills — thus exactly going in the direction of creating new jobs centered on new tasks.”  And rather than reduce jobs and the livelihoods of humans, AI under common ownership and planning could reduce the hours of human labour for all.

And then there is the issue of the profitability boost provided by AI technology.  Even if LLM investment requires less physical means of production and lowers costs of such capital, the loss of human labour power could be even greater.  So Marx’s law of profitability would still apply.  It’s the great contradiction of capitalism that increasing the productivity of labour through more machines (AI) reduces the profitability of capital.  That leads to regular and recurring crises of production, investment and employment – of increasing intensity and duration.

Finally, there is the question of intelligence.  Microsoft argues that intelligence is a “very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” Microsoft hints that LLMs could soon obtain this ‘generalised intelligence’ and surpass all human ability to think. 

But even here, there is scepticism. “The ChatGPT model is huge, but it’s not huge enough to retain every exact fact it’s encountered in its training set.  It can produce a convincing answer to anything, but that doesn’t mean it’s reflecting actual facts in its answers. You always have to stay sceptical and fact check what it tells you. Language models are also famous for “hallucinating”—for inventing new facts that fit the sentence structure despite having no basis in the underlying data.”  That’s not very encouraging.

But Guglielmo Carchedi has a more fundamental reason to deny that AI can replace human ‘intelligence’.  Carchedi and Roberts: “machines behave according only to the rules of formal logic.  Contrary to humans, machines are structurally unable to behave according to the rules of dialectical thinking. Only humans do that.”  (Capitalism in the 21st century, p167).  Here is the ChatGPT answer to the dialectical question:  “Can A be equal to A and at the same time be different from A?”  “No, it is not possible for A to be equal to A and at the same time be different from A. This would be a contradiction in terms, as the statement “A is equal to A” is a tautology and always true, while the statement “A is different from A” is a contradiction and always false. Therefore, these two statements cannot both be true at the same time.”
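Carchedi’s point about formal logic can be made concrete in code: a program evaluating “A equals A” for a fixed value is bound by the law of identity and cannot answer otherwise (a minimal sketch; the function and variable names are illustrative):

```python
# A machine applying formal logic: for any fixed value, "A == A" is a
# tautology, and the program can never return anything but True.
def formally_equal(a):
    return a == a

for A in [0, 1, "A", 3.14, (1, 2)]:
    assert formally_equal(A)  # holds for every value tested here
```

That is exactly the answer ChatGPT gives above: within formal logic the question has only one possible answer, which is Carchedi’s point about machines and dialectical thinking.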

Machines cannot think of potential and qualitative changes.  New knowledge comes from such transformations (human), not from the extension of existing knowledge (machines).  Only human intelligence is social and can see the potential for change, in particular social change, that leads to a better life for humanity and nature.

25 Jul 08:40

Dall-E Guy

by zachcranor


The post Dall-E Guy appeared first on Last Place Comics.

28 Jan 20:26

Wiki History Game

by slaporte

08 Jan 22:25

Top ten posts of 2021

by michael roberts

been enjoying this blog through 2021, here's a nice summary. also check out the post after this one, gives a forecast for 2022 (not a super surprising one tbh)

I began this blog eleven years ago. Over those years, I have posted over 1000 times with over 4 million viewings. There are currently 6100 regular followers of the blog up from 5300 last year; and 11500 followers (up from 10500 last year) of the Michael Roberts Facebook site, which I started six years ago.  On that Facebook site, I put short daily items of information or comment on economics and economic events. 

And at the beginning of 2020, I launched the Michael Roberts YouTube channel. This now has 2100 subscribers, up from 1200 in its first year. If you haven’t joined up yet, have a look at the channel, which includes presentations by me on a variety of economic subjects; interviews with other Marxist economists; and some zoom debates in which I participated.  This year there are sessions on Marx’s theory of value; his law of accumulation; and the law of profitability; also Marx’s reproduction schema as outlined in Volume Two of Capital; and an interview with Rick Kuhn on Henryk Grossman; as well as critiques of Modern Monetary Theory and imperialism.

As for the blog, 2021 has seen 540,000 views.  That’s down from 683,000 in 2020, the first year of COVID.  But I did 82 posts in that year compared with 73 this year, the lowest yearly total since I started the blog.  So viewings per post this year actually rose to a new high.  Maybe I’m getting old, but my other excuse for fewer posts is the increased demand on my time in doing presentations, interviews, articles and, above all, in completing a new book with Guglielmo Carchedi (entitled Capitalism in the 21st century: through the prism of value), to be published by Pluto Press in 2022.

Where do my blog viewers come from?  From over 170 countries globally!  Led by 140k yearly viewings in the US (or about 25%); 68k from the UK (12%); then a whole range of Spanish-speaking countries led by Spain; and there were 30k viewings from Brazil, the fourth largest group.  Then Canada and Australia.  India is among the top ten countries.  Right at the other end of the spectrum, I have had viewings from the Cook Islands, Rwanda, Madagascar, Guadeloupe and even the Vatican City!  Also viewings from war-torn Yemen, Afghanistan, Haiti, Syria, and Kosovo.  There were nearly 300 viewings from Cuba, over 1000 from Vietnam, but only 1800 from China.

And yet China has been the main focus of interest by blog viewers in 2021.  The top post was an analysis of the membership of the Chinese Communist Party.  New research found that of the 95m CPC members, the majority are not manual workers or peasants, but nor are they capitalists or small business people.  The main contingent are professional workers.  Professionals are defined as “all the professional and technical personnel working in science-related sectors (e.g., science, engineering, agriculture, medical care) and social science-related sector (e.g., economics, finance, law, education, press and publication, religion)”.   So the CPC is not controlled by a capitalist class, but neither is it a party of manual workers or peasants.

And other China posts made the top ten.  Viewers were keen to learn about the impending bust of the Evergrande Group, the second largest property developer in China, which is saddled with almost Rmb2tn of total liabilities, or over $300bn.  The rapid expansion of the Chinese real estate sector has been driven by the need to house the influx of workers to the cities.  Real estate construction now accounts for 13% of the economy, up from just 5% in 1995, and for about 28% of the nation’s total lending.  But housing projects have been implemented through private developers for sale, not public projects for rent.  And these developers and their billionaire owners have racked up huge amounts of debt.

In the post, I argued that there was not going to be a financial crash in China.  The government controls banking, including the central bank; the big four state-owned commercial banks, which are the largest banks in the world; the so-called ‘bad banks’, which absorb bad loans; big asset managers; and most of the largest companies. So the government can tell state-owned asset managers and pension funds to buy shares and bonds to prop up property prices and to fund property companies. And it can tell the central bank, the People’s Bank of China, to do ‘whatever it takes’.  And it can tell the state bad banks to buy bad debt from commercial banks.

Nevertheless, the Evergrande mess is a symptom that the growing size and influence of the capitalist sector in China has weakened the performance of the economy and widened inequalities.  The Chinese economy is now strong enough not to rely on foreign investment or on unproductive capitalist sectors like property for growth.  So increasing the role of planning and state-led investment, the main basis of China’s economic success over the 70 years of the People’s Republic, has never been more compelling.

In another top ten post, I discussed the new policy shift of the CPC in vowing to end what it called a “disorderly expansion of capital”.  The Chinese leadership was responding to a public backlash over increasing inequality, the cost of education and healthcare and conspicuous consumption. 

The CPC has launched a crackdown on the consumer tech and media giants and introduced curbs on private education and speculative property development.  It has also banned cryptocurrency operations. The contradiction between China’s party/state-controlled economy alongside a large and growing capitalist sector has intensified during the COVID pandemic.  That contradiction is manifested in how to raise productivity to meet the social needs of 1.4bn people, in the face of vagaries of the profitability of its increasingly unproductive capitalist sector. China’s workforce is falling in size; productivity growth has been slowing and China faces a technology and trade war with the US and its imperialist allies.  These are the challenges for China over the next decade or so.

Apart from China, blog readers were also interested in issues relating to Marxist economic theory.  Second in the list of ten posts was one about the role of the rate and mass of profit in capitalist crises.  This post centred on the criticism made by David Harvey, the prominent Marxist scholar, of Marx’s law of profitability, as presented by me in particular.  Harvey claims that the movement of the mass of profit is ignored by ‘profitability promoters’ like myself.  In the post, I outlined my answer to this charge and showed that it is not the case.  Marx’s ‘double-edge’ law of the connection between the rate and mass of profit is not a refutation of the law of profitability as the underlying cause of crises; on the contrary, it is integrally connected. And alternative ‘multiple’ causes (like underconsumption, ‘too much surplus to absorb’, disproportion, financial fragility etc) remain unconvincing and unproven in comparison.

A world rate of profit (%): is the rate or the mass of profit that matters in crises?

The next most popular post was on the relative decline of US imperialism, as exposed in the ignominious US withdrawal from Afghanistan after 20 years of occupation.  In this post, I argued that the relative decline of the dollar will continue even if the Afghanistan debacle is not a tipping point.

During the COVID year of 2020, output, investment and employment in nearly all the economies of the world plummeted, as lockdowns, social isolation and collapsing international trade contracted output and spending.  And yet the opposite was the case for the stock and bond markets of the major economies.  The US stock market indexes (along with others) ended 2020 at all-time highs and repeated that result in 2021.   In another top ten post, I explained the reasons for this and the role of what Marx called ‘fictitious capital’.  As Engels first said, speculating in financial markets is a major counteracting factor to falling profitability in the ‘real economy’.  But all good things must come to an end.

And in another popular post, I discussed the profusion of financial scandals and busts that took place over the year.  In particular, the GameStop saga showed that company and personal pension funds run by the ‘smart people’ are really a rip-off for working people. What is needed are state-funded pensions not subject to the volatility of the financial game.  As Marx said, the financial system “develops the motive of capitalist production”, namely “enrichment by exploitation of others’ labour, into the purest and most colossal system of gambling and swindling and restricts even more the already small number of exploiters of social wealth”.

Alongside rocketing stock markets, the ‘real’ economy has started to experience rising inflation of the prices of goods and services.  In several posts, I discussed the inadequacies of mainstream economics explanations and tried to present a Marxist theory of inflation.  One post concluded that the US monetary and fiscal authorities may think they can control inflation (although the evidence is clear that they did not in the 1970s and have not controlled ‘disinflation’ in the last ten years).  But they can do little to get the US economy onto a sustained strong pace of growth in GDP, investment and employment.  So the US economy over the next few years is more likely to suffer from stagflation, than from inflationary ‘overheating’.

US annual consumer goods and services inflation rate %

China may have its problems in sustaining economic growth, improving living standards and reducing inequality, but that is nothing to what faces US President Biden in the remainder of his presidency.  Biden’s plan to boost the economy with infrastructure spending and fiscal stimulus has been floundering.  And my post earlier in the year predicted the failure of such policies to put the US economy onto a path of sustained higher growth.  If disillusionment in Biden’s policies rises, as it has, that could lay the political base for the return of something like Trumpism.

The last top ten post was on a long term view of productivity growth under capitalism.  It has been the historic mission of the capitalist mode of production to develop the “productive forces” (namely the technology and labour necessary to increase the output of things and services that human society needs or wants).  Indeed, it is the main claim of supporters of capitalism that it is the best (even only) system of social organisation able to develop scientific knowledge, technology and human ‘capital’, all through ‘the market’.  

But productivity growth in the major economies has been slowing for decades.  That’s because there is a basic contradiction in capitalist production. Production is for profit, not social need. And increased investment in technology that replaces value-creating labour leads to a tendency for profitability to fall. And the falling profitability of capital accumulation eventually comes into conflict with developing the productive forces. The long-term decline in the profitability of capital globally has lowered growth in productive investment and thus labour productivity growth.  Capitalism is finding it ever more difficult to expand the ‘productive forces’. It is failing in its ‘historic mission’ that Keynes was so confident of 90 years ago.

11 Nov 02:51

Fruity Zoops

by zachcranor

The post Fruity Zoops appeared first on Last Place Comics.

01 Sep 16:11

Snapplebee’s
by zachcranor

ben always loved sloppy 'bees


The post Snapplebee’s appeared first on Last Place Comics.

15 Aug 17:11

Meet Lodewijk Gelauff: Wikimedian of the Year 2021 20th Year Honouree winner

by Chris Koerner
(ZMcCune (WMF) CC BY-SA 4.0)

This year’s seven Wikimedian of the Year award winners were announced today at the 2021 virtual Wikimania convening. Read the interview below with Lodewijk Gelauff, recipient of the 20th Year Honouree Award. 

Lodewijk Gelauff is a community organizer and innovator and has been a Wikipedia editor since 2005. He is a prolific contributor, a mentor to many Wikimedians, and a volunteer for several community groups and efforts. Lodewijk is one of the initiators of Wiki Loves Monuments, the annual Wikipedia photo contest around cultural heritage, and has led this project for a decade.

Wiki Loves Monuments, ranked as the world’s largest photography competition, was born from a successful 2010 pilot by Wikimedia Netherlands, which resulted in 12,500 freely licensed images of monuments that can now be used on Wikipedia by anybody for any purpose. 

Since then, Wiki Loves Monuments has helped to collect information on 1.5 million monuments from 76 national competitions, with more than 1.7 million pictures submitted by over 60,000 participants, adding to the sum of all human knowledge gathered on Wikimedia projects. Since the pilot, Lodewijk has served on the Wiki Loves Monuments international committee for over a decade.

“I contribute to Wikimedia because of the excitement to learn something new every day, while creating an opportunity to share it with the rest of the world. What else could you wish for when spending your free time?”

Additionally, Lodewijk was one of the three Wikipedians to accept the 2015 Erasmus Prize from the King of the Netherlands on behalf of the Wikipedia Community. He is a PhD Candidate at Stanford University in Management Science & Engineering.

An initiative started in 2021, the year of Wikipedia’s 20th birthday, the Wikimedian of the Year  20th Year Honouree award celebrates a Wikimedian who has been a part of our movement since the early years and has been a significant force in growing our global communities. 

Lodewijk gets the surprise call from Jimmy Wales

This recognition is one of seven awards made this year to celebrate contributors who have made an exceptional impact on our movement. The awards were announced at this year’s virtual Wikimania celebration by Wikipedia Founder Jimmy Wales (watch the announcement!). 

“I am thrilled with this opportunity to celebrate Lodewijk, an early contributor within the movement. By creating the first-ever Wiki Loves competition, he not only built the world’s largest photo competition; he also built a creative way to introduce new people to the movement, from heritage enthusiasts to hobbyists. Throughout his leadership in our movement, Lodewijk is continually recognized for his expertise organizing communities, his thoughtful consideration of movement issues, and his role championing local communities and increased participation in free knowledge.”

– Jimmy Wales

We spoke with Lodewijk to learn more about his experiences and perspectives on the Wikimedia movement. Here are some highlights: 

Q: How does recording monuments help sustain and build awareness about a country’s heritage?

“In Dutch we have the expression ‘unknown makes unloved’ (onbekend maakt onbemind), which I think is true for so many historic buildings. If we can’t find a picture of something, it feels like it doesn’t exist. Day in and day out, heritage gets demolished — sometimes intentionally, but often also unintentionally. Photographing them is a small step in the documentation process, but also in making the knowledge about these sites accessible to a broader audience. 

In some countries, there is a struggle to maintain or even recognize a rich cultural history. I’m impressed by the work done by our local colleagues to better document and recognize cultural history. Take for example the exemplary work done by our friends in South Africa, in a struggle to lift copyright-based restrictions on photographing sites in their cultural heritage. These restrictions were making it impossible to legally photograph more recent (and hence diverse) heritage.”

Q: What’s your favourite monument that you have seen?

“I especially enjoy the expanding coverage of the buildings that may be less exciting by themselves, but are valuable as a set. For individual photos, I’m always charmed by context and a good story. For example, this photo of the Teapot Dome Service Station as a reminder of the Teapot Dome scandal from the 1920s (4th prize 2017 in the US). Maybe the most memorable is this photo of the Trinity Cathedral in St. Petersburg on fire, a stark reminder that we can never take our heritage for granted. But sometimes, it is the unknown heritage that just takes your breath away. I have been a secret admirer of some of the more gloomy ‘forgotten heritage’ pictures that participants have posted, like this church in Stawiszyn, Poland.”

Interior of the closed Evangelical Church of the Augsburg Confession in Stawiszyn (Marian Naworski CC BY-SA 4.0)

Q: Can you share a favorite memory from your time contributing to Wikimedia?

“It’s hard to pick a single memory. Let me give you two. For almost any Wikipedian, it is probably their first edit that is a great memory. I had just finalized a thesis in high school and done a lot of research on galvanization. It felt so great to be able to share the result of that work with other people, rather than let it collect dust in some drawer.

“My second memory is when we were organizing the first edition of Wiki Loves Monuments in 2010 in the Netherlands, and all the positive responses it received around the world. We quickly realized that we had a concept that could scale. It was already the next year that we managed to break the Guinness World Record for Largest Photography Competition, with organizers and participants from 18 countries. We would easily break this record again in 2012, and still hold it.”

“Nothing is as encouraging as when you see someone’s face light up when they realize that yes, they are really invited to participate, to contribute.”

Q: How have you seen the movement evolve since you first got involved?

“I started editing in 2005, and that was probably also the last year that I would hear “wiki-what?” when I told people how I liked to spend my spare time. We look back at these days as the early days, with only 100,000 articles in Dutch. But if you really think about that, this was already an amazing accomplishment. The driving motivation has remained the same: we like to share knowledge, and figure out how things fit together. 

Two things have changed dramatically within our community: we rely more and more on explicitly referenced sources, making it harder for new contributors to participate, but at the same time making it maybe easier to keep improving the quality. The second thing is that we better recognize our own faults. We understand better what kind of content is consistently under-represented, and how our community is in many ways not diverse enough to really describe the sum of all human knowledge. We’re making attempts to chip away at the content gaps that we observe, but have so much work left to do.”

Q:  What role do these campaigns play in attracting more newcomers to the Wikimedia movement?

“The majority of participants in our national competitions never contributed to any of our projects before. For some, this is their first encounter with the free knowledge movement, and maybe the first time they realize that if they contribute something, it can actually change Wikipedia, that website that they use so frequently. 

Nothing is as encouraging as when you see someone’s face light up when they realize that yes, they are really invited to participate, to contribute. But at the same time, we also see that Wiki Loves Monuments can be a catalyst for local communities to organize themselves, and take it up as one of their first structured activities. These new contributors, or their local heritage partners can even help out in organizing the competition. This is all possible because we try to keep the concept as simple and structured as we can: it is very clear what to photograph, and why we care about it. There is no question about its relevance, and how you can help. Find a heritage site, photograph it and upload it. It’s that simple. If you get excited, you can help document it further.”

“We understand better what kind of content is consistently under-represented… We’re making attempts to chip away at the content gaps that we observe, but have so much work left to do.”

Q: What motivates you to contribute to Wikimedia projects?

“The excitement to learn something new every day, while creating an opportunity to share it with the rest of the world. And on top of that, I get to collaborate with a ton of people that share this same excitement. What else could you wish for when spending your free time?”

Congratulations, Lodewijk!

About the 2021 Wikimedian of the Year Awards

The Wikimedian of the Year is an annual award that honours contributors to Wikimedia projects, including Wikipedia editors, to highlight major achievements within the Wikimedia movement in the previous year. The tradition dates back to 2011 and has evolved since then in dynamic ways to welcome and celebrate Wikimedians from different backgrounds and experiences. This year’s celebration is bigger and more inclusive than ever before, recognizing seven exceptional contributors to the Wikimedia movement in six categories, including Newcomer of the Year, 20th Year Honouree, Rich Media and Tech contributors, and Honourable Mentions, as well as the Wikimedian of the Year. 

*This interview has been edited for clarity and length.

15 Aug 15:44

Support us to keep Hype Machine running


seems like they're up to a sustainable 3k supporters at least

14 Aug 20:49

The productivity crisis

by michael roberts

amazing, been thinking about this a lot. we're at a moment with technology where it seems to be adding as much drag as thrust; on a personal level, distraction vs efficiency. Profits over social needs leads to a missed opportunity. It feels like a deployment failure, at least, to get over these humps while we've had the momentum.

It has been the historic mission of the capitalist mode of production to develop the “productive forces” (namely the technology and labour necessary to increase the output of things and services that human society needs or wants).  Indeed, it is the main claim of supporters of capitalism that it is the best (even only) system of social organisation able to develop scientific knowledge, technology and human ‘capital’, all through ‘the market’. 

The development of the productive forces in human history is best measured by the level and pace of change in the productivity of labour.  And there is no doubt, as Marx and Engels first argued in the Communist Manifesto, that capitalism has been the most successful system so far in raising the productivity of labour to produce more goods and services for humanity (indeed, see my recent post).  In the graph below, we can see the accelerated rise in the productivity of labour from the 1800s onwards.

The rise of productivity under capitalism

But Marx also argued that the underlying contradiction of the capitalist mode of production is between profit and productivity.  Rising productivity of labour should lead to improved living standards for humanity including reducing the hours, weeks and years of toil in producing goods and services for all.  But under capitalism, even with rising labour productivity, global poverty remains, inequalities of income and wealth are rising and the bulk of humanity has not been freed from daily toil.

Back in 1930, John Maynard Keynes was an esteemed proponent of the benefits of capitalism.  He argued that if the capitalist economy was ‘managed’ well (by the likes of wise men like himself), then capitalism could eventually deliver, through science and technology, a world of leisure for the majority and the end of toil.  This is what he told an audience of his Cambridge University students in a lecture during the depth of the Great Depression of the 1930s.  He said: yes, things look bad for capitalism now in this depression, but don’t be seduced into opting for socialism or communism (as many students were thinking then), because by the time of your grandchildren, thanks to technology and the consequent rise in the productivity of labour, everybody will be working a 15-hour week and the economic problem will not be one of toil but leisure. (Economic Possibilities for Our Grandchildren, in his Essays in Persuasion)

Keynes concluded: “I draw the conclusion that, assuming no important wars and no important increase in population, the ‘economic problem’ may be solved, or be at least within sight of solution, within a hundred years. This means that the economic problem is not – if we look into the future – the permanent problem of the human race.”  From this quote alone, we can see the failure of Keynes’s prognosis: no wars? (written just ten years before the Second World War). And he never refers to the colonial world in his forecast, just the advanced capitalist economies; nor to the inequalities of income and wealth that have risen sharply since the 1930s. And as we approach the 100 years set by Keynes, there is little sign that the ‘economic problem’ has been solved.

Keynes continued: “for the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest (!MR) will have won for him, to live wisely and agreeably and well.” Keynes predicted superabundance and a three-hour day – the socialist dream, but under capitalism.  Well, the average working week in the US in 1930 – if you had a job – was about 50 hours.  It is still above 40 hours (including overtime) now for full-time permanent employment. Indeed, in 1980, the average hours worked in a year was about 1800 in the advanced economies.  Currently, it is still about 1800 hours – so again, no change there.

But even more disastrous for the capitalist mission and Keynes’ forecasts is that in the last 50 years from about the 1970s to now, growth in the productivity of labour has been slowing in all the major capitalist economies.  Capitalism is not fulfilling its only claim to fame – expanding the productive forces.  Instead it is showing serious signs of exhaustion.  Indeed, as inequality rises, productivity growth falls.

Economic growth depends on two factors: 1) the size of the employed workforce and 2) the productivity of that workforce.  On the first factor, the advanced capitalist economies are running out of additional human labour power. But let’s concentrate on the second factor in this post: the productivity of labour. Labour productivity growth globally has been slowing for 50 years and looks set to continue doing so.
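That two-factor decomposition is just an accounting identity: real GDP growth is, to a first approximation, employment growth plus labour productivity growth. A minimal sketch, with illustrative numbers that are not from the post:

```python
# Real GDP growth decomposes, approximately, into employment growth
# plus labour productivity (output per worker) growth.
def gdp_growth(employment_growth: float, productivity_growth: float) -> float:
    """Approximate real GDP growth in percentage points per year."""
    return employment_growth + productivity_growth

# Illustrative only: a workforce growing at 0.3% a year combined with
# productivity growth of 0.7% a year gives trend growth of about 1%.
print(gdp_growth(0.3, 0.7))
```

With labour supply stagnating, the identity makes the post's point directly: trend growth can only come from the productivity term.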

For the top eleven economies (this excludes China), productivity growth has dropped to a trend rate of just 0.7% p.a.

Why is productivity growth in the major economies falling? The ‘productivity puzzle’ (as the mainstream economists like to call it) has been debated for some time now.  The ‘demand pull’ Keynesian explanation is that capitalism is in secular stagnation due to a lack of effective demand needed to encourage capitalists to invest in productivity-enhancing technology. Then there is the supply-side argument from others that there are not enough effective productivity-enhancing technologies to invest in anyway – the day of the computer, the internet etc, is nearly over and there is nothing new that will have the same impact.

Look at the average growth rates of labour productivity in the most important capitalist economies since the 1890s.  Note in every case, the rate of growth between 1890-1910 was higher than 2006-18.  Broadly speaking, labour productivity growth peaked in the 1950s and fell back in succeeding decades to reach the lows we see in the last 20 years.  The so-called Golden Age of 1950-60s marked the peak of the development of the ‘productive forces’ under global capital.  Since then, it has been downhill at an accelerating pace.  Annual average productivity growth in France is down 87% since the 1960s; Germany the same; in Japan it is down 90%; the UK down 80% and only the US is a little better, down only 60%.

There are three factors behind productivity growth: the amount of labour employed; the amount invested in machinery and technology; and the X-factor of the quality and innovatory skill of the workforce.  Mainstream growth accounting calls this last factor total factor productivity (TFP), measured as the ‘unaccounted-for’ contribution to productivity growth after capital invested and labour employed. This last factor is in secular decline.
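The growth-accounting residual described above can be sketched as follows; the one-third capital share is a conventional textbook assumption, not a figure from the post:

```python
# Solow growth accounting: TFP growth is the residual of output growth
# after subtracting the share-weighted growth of capital and labour inputs.
def tfp_growth(output_g: float, capital_g: float, labour_g: float,
               capital_share: float = 1 / 3) -> float:
    """Solow residual in percentage points; capital_share is the
    (assumed) share of capital income in output."""
    return output_g - capital_share * capital_g - (1 - capital_share) * labour_g

# Illustrative: 2% output growth, 3% capital growth and 1% labour growth
# leave roughly a third of a point 'unaccounted for', attributed to TFP.
print(round(tfp_growth(2.0, 3.0, 1.0), 2))  # 0.33
```

Because TFP is defined as whatever is left over, it absorbs measurement error as well as genuine innovation, which is why the 'mismeasurement' debate discussed below the graph exists at all.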

Corresponding to this slowing of labour productivity is the secular fall in the ratio of fixed asset investment to GDP in the advanced economies over the last 50 years, ie starting from the 1970s.

Investment to GDP has declined in all the major economies since 2007 (with the exception of China). In 1980, both advanced capitalist economies and ‘emerging’ capitalist ones (ex-China) had investment rates around 25% of GDP.  Now the rate averages around 22%, a more than 10% decline.  The rate fell below 20% for advanced economies during the Great Recession.

The slowdown in both investment and productivity growth began in the 1970s. And this is no accident. The secular slowing of productivity growth is clearly linked to the secular slowing of more investment in productive value-creating assets.  There is new evidence to show this.  In a comprehensive study, four mainstream economists have decomposed the causal components of the fall in productivity growth. 

For the US, they find that, of a total slowdown of 1.6%pts in average annual productivity growth since the 1970s, 70bp or about 45% was due to slowing investment, either caused by recurring crises or by structural factors.  Another 20bp or 13% was due to ‘mismeasurement’ (this is a recent argument trying to claim that there has been no fall in productivity growth).  Another 17% was due to the rise of ‘intangibles’ (investment in ‘goodwill’) that does not show an increase in fixed assets (this begs the question of whether ‘intangibles’ like ‘goodwill’ are really value-creating). About 9% is due to the decline in global trade growth since the early 2000s; and finally nearly 25% is due to investment by capitalists into unproductive sectors like property and finance.  The four economists sum up their conclusions: “Comparing the post-2005 period with the preceding decade for 5 advanced economies, we seek to explain a slowdown of 0.8 to 1.8pp. We trace most of this to lower contributions of TFP and capital deepening, with manufacturing accounting for the biggest sectoral share of the slowdown.”

In other words, if we exclude ‘intangibles’, mismeasurement and unproductive investment, the cause of lower productivity growth is lower investment growth in productive assets.  The paper also notes that there has been no reduction in scientific research and development, on the contrary.  It is just that new technical advances are not being applied by capitalists into investment.  Now maybe, the rise of robots and AI is going to give a productivity boost in the major economies in the post-COVID world.  But don’t count on it.  As the great productivity theorist of the 1980s, Robert Solow, put it in a famous quip ‘you can see the computer age everywhere but in the productivity statistics’ (Solow 1987). 

If investment is key to productivity growth, the next question follows: why did investment begin to drop off from the 1970s? Is it really a ‘lack of effective demand’ or a lack of productivity-generating technologies as the mainstream has argued? More likely it is the Marxist explanation.  Since the 1960s businesses in the major economies have experienced a secular fall in the profitability of capital and so find it increasingly unprofitable to invest in heaps of new technology to replace labour.

And when you compare the changes in the productivity of labour and the profitability of capital in the US, you find a close correlation. 

Source: Penn World Tables 10.0 (IRR series), TED Conference Board output per employee series

I also find a positive correlation of 0.74 between changes in investment and labour productivity in the US from 1968 to 2014 (based on Extended Penn World Tables). And the correlation between changes in the rate of profit and investment is also strongly positive at 0.47, while the correlation between changes in profitability and labour productivity is even higher at 0.67.
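The figures quoted are ordinary Pearson correlation coefficients on annual changes. A minimal sketch of the calculation, using dummy series rather than the Penn World Tables data:

```python
# Pearson correlation coefficient between two equal-length series:
# covariance of the deviations, divided by the product of their
# standard deviations. Ranges from -1 (inverse) to +1 (perfect co-movement).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Dummy annual % changes in investment and productivity, for illustration;
# co-moving series like these produce a coefficient close to +1.
invest = [2.1, -0.5, 3.0, 1.2, -1.0, 2.5]
prod = [1.5, 0.2, 2.2, 1.0, -0.4, 1.8]
print(round(pearson(invest, prod), 2))
```

A coefficient of 0.74 between investment and productivity changes, as quoted above, indicates a strong but not mechanical relationship.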

And as the new mainstream study also concludes, there is another key factor that has led to a decline in investment in productive labour: the switch by capitalists to speculating in ‘fictitious capital’ in the expectation that gains from buying and selling shares and bonds will deliver better returns than investment in technology to make things or deliver services. As profitability in productive investment fell, investment in financial assets became increasingly attractive and so there was a fall in what the new study calls “allocative efficiency” in investment. This has accelerated during the COVID slump. 

There is a basic contradiction in capitalist production. Production is for profit, not social need. And increased investment in technology that replaces value-creating labour leads to a tendency for profitability to fall. And the falling profitability of capital accumulation eventually comes into conflict with developing the productive forces.  The long-term decline in the profitability of capital globally has lowered growth in productive investment and thus labour productivity growth.  Capitalism is finding it ever more difficult to expand the ‘productive forces’.  It is failing in its ‘historic mission’ that Keynes was so confident of 90 years ago.

14 Aug 20:41

Inflation, interest rates and debt

by michael roberts

Inflation of the prices of goods and services is good or bad news depending on your relation to the means of production.  For labour, with no ownership of the means of production and only making a living from selling its power to work, inflation is not good news, because it eats into real incomes by increasing the prices of necessaries. 

Currently, as the major economies come out of the pandemic slump, employers are increasingly complaining that they cannot get staff to return to their low-paid jobs in the leisure, hospitality and other service industries.  They are being forced to bid up wage rates to attract people back into jobs with little satisfaction, poor conditions, no unions, no sick pay, no holiday pay etc. 

The prospect of higher wages sounds like good news for layers of workers previously on minimum wage levels or even below.  But higher wages are a monetary or prices illusion if at the same time prices for food and other necessaries start to rise sharply.  And they are.  The official US inflation rate hit 5% yoy in May. This was the highest reading since August 2008.  It’s the same story in the UK and Europe. Even though the level of inflation is only about 2% a year there, that rate is the highest for over seven years.

The rate is partly the result of ‘base effects’ ie the rate dropped sharply during the pandemic slump and prices have just bounced back in the last few months.  But it is also the result of sharp rises in commodity prices (agricultural products, metals and energy) driven by a slow return to production of these goods globally and also a partial breakdown in the international ‘supply chain’ of trade caused by lockdowns and restrictions on movement.  In effect, there are ‘bottlenecks’ in supply that make it difficult to meet rising consumer and producer demand.  That’s driving up the rate of inflation in prices.

Inflation may be bad news for labour, but ‘moderate’ inflation is not bad news for capital.  Companies like a little inflation because it gives them some leeway to raise prices to sustain profitability in competition with others.  But what capital does not like is accelerating inflation.  That delivers a host of problems.  Raw material prices become uncontrollable, employees start demanding higher wages and there is a serious risk that interest rates start to rise, making borrowing more expensive.  So inflation of prices as such is not an issue for capitalists: what they hate are two things that might arise from accelerating inflation: rising wages and rising interest rates.  The former eats directly into profits at the bottom and the latter drives up borrowing costs and so clips profits off from the top. 

Now Keynesians argue that wage rises are good news for all, workers and capitalists, as higher wages will boost ‘effective demand’ and get economies going.  But they also hint that capitalists need not worry about higher wages because if ‘wage-push’ inflation follows (ie capitalists raise their prices in response to wage increases), workers will eventually lose with one hand in real terms what they gain with the other – and profitability for capital will be preserved.  This circuitous argument allows Keynesian theory to claim that wage rises are good and won’t hurt capitalists – but at the end of the circuit of argument, we find that it is labour that loses or at least gains nothing.

However, as Marx explained in his famous debate with trade unionist Thomas Weston on whether wage rises cause price rises, this argument is really anti-labour and workers should not fall for it.  Moreover, wage rises mean lower profits, other things being equal, not higher prices. That is why capitalists oppose wage rises to the bitter end, despite the Keynesian calls. 

You see, it’s profitability that decides investment and production, not ‘effective demand’.  As Marx said, wages are the dependent variable, not the determining factor, in capitalist production: “the rate of accumulation is the independent, not the dependent, variable; the rate of wages the dependent, not the independent variable”, and “(t)he rise of wages (…) is confined within limits that not only leave intact the foundations of the capitalistic system, but also secure its reproduction on a progressive scale”.  In other words, wages cannot rise to the point that they seriously threaten profits.  If they do, governments will intervene with so-called ‘incomes policies’ to control wages and impose taxes to reduce income gains – by the way, policies in the past that were supported by Keynesians to control wage-push inflation.

The current data on wages in the US are distorted because those who have been made unemployed during the pandemic were generally the lower-paid and the professional and manufacturing sectors were able to boost wages somewhat.  The current figures reflect this narrow base for wage increases.  But overall, so far, employee costs for capitalists are not rising at any faster rate than before the pandemic (in the US at about 3% a year).  Given that inflation is now 4-5% in the US, average real wages are actually falling (even though the higher-paid are doing okay).

So far, financial markets are not too worried about rising inflation.  What matters to them is whether central banks will start to raise the short-term ‘policy’ interest rate that sets the floor for all interest rates charged for borrowing money to invest, produce and speculate.  So far, financial markets have been reassured by the likes of the Federal Reserve, the ECB and the Bank of England that there will be no move to hike interest rates. 

Thus, the US stock market hit yet another all-time high last week and long-term bond yields (the main interest rate for corporations), after jumping a little on the latest inflation rate figure, fell back again after the Fed claimed that the current inflation shift was ‘transitory’ and inflation would eventually settle back to pre-pandemic levels near the Fed’s own target of 2% a year.

However, it seems that the Fed is not so sure of this ‘transitory’ future.  At its latest interest rate meeting, Fed officials were divided on their forecasts for inflation over the next few years.  The consensus view was that the ‘core inflation’ rate (that’s after discounting food and energy – hardly unimportant items for workers!) would jump to 3% this year, but then fall back to 2.1% in 2022 and 2023 even as the economy reached ‘full employment’ and maximum capacity.  So the consensus, as expressed by Fed chair, Jay Powell, was that there would be no need to raise the Fed’s policy rate until well into 2023.  However, several Fed regional presidents seemed less sure that inflation rates would drop back and given the supply ‘bottlenecks’ and the ‘sugar-rush’ in consumer demand, they have talked of an earlier move on rates.

And this is the point.  As I have argued above, capitalists, whether in productive or speculative sectors, are not really worried by inflation as it affects them little.  What worries them and their decisions on investment in productive sectors or continued speculation in financial assets is interest rates which affect the cost of borrowing relative to profitability in the ‘real’ economy and to prices in stocks and bonds. 

Indeed, as I have argued before, inflation in goods and services actually tends to slow or even disappear in capitalist economies where the production of new value added slows in growth and leads to slowing in demand from capitalists and workers.  That is the tendency of the last 40 years, for example, in the major economies as the growth in the productivity of labour has declined and average profitability of capital has fallen.  The inflation rate has fallen and efforts by central banks to achieve ‘moderate’ inflation of, say 2% a year, have failed in the US, Europe and Japan.  Short-term interest rates, generally influenced by central banks, have dropped towards zero while long-term rates, more endogenously determined by market forces, have also dropped to historic lows not seen since the 1930s.

The decline in nominal interest rates

It’s interest rates that matter because corporate debt is at record highs in most major economies, while stock markets ride on a flow of borrowed money.  So any jump in the cost of borrowing could be devastating to many companies and trigger a stock and bond market collapse. 

Source: BIS, author’s calculations

I have discussed before the fact that between 15-20% of companies in the major economies are barely covering their debt servicing costs with the profits they are making. According to Bloomberg, in the US almost 200 big corporations have joined the ranks of so-called zombie firms since the onset of the pandemic; zombies now account for 20% of the 3,000 largest publicly-traded companies, with debts of $1.36 trillion. That’s 527 of the 3,000 companies that didn’t earn enough to meet their interest payments!
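A ‘zombie’ firm in this sense is conventionally one whose interest coverage ratio (profits relative to interest payments) stays below one. A minimal sketch with hypothetical figures:

```python
# A firm is conventionally classed as a 'zombie' when its interest
# coverage ratio (operating profit / interest expense) is below 1:
# it does not earn enough to service its own debt.
def interest_coverage(operating_profit: float, interest_expense: float) -> float:
    """Earnings available to service debt, per unit of interest due."""
    return operating_profit / interest_expense

def is_zombie(operating_profit: float, interest_expense: float) -> bool:
    return interest_coverage(operating_profit, interest_expense) < 1.0

# Hypothetical firm: $40m of operating profit against a $55m interest bill.
print(is_zombie(40.0, 55.0))  # True
```

Such firms survive only while borrowing stays cheap, which is why the post argues that rising interest rates, not inflation itself, are the real danger.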

Debt servicing costs have been falling on average even though debt is piling up.  That’s because of the sharply falling cost of borrowing.  If that scenario should start to reverse, then the possibility of corporate bankruptcies and a financial crash becomes a probability. The US corporate debt servicing ratio (debt costs to corporate income) has jumped to a 20-year high in the pandemic. And if corporate bankruptcies (currently very low) start to emerge, the banking system may come under pressure. 

Source: BIS, author’s calculations

Recently, the Federal Reserve conducted a ‘financial stress test’ on US banks.  It found that nearly all were in good shape with plenty of spare capital to cover any loan losses, certainly compared to before the global financial crash of 2008-9.  They were in such good shape, that they could plan to pay shareholders increased dividends and buy back shares to boost prices.  However, while the large ‘retail’ banks seemed in good shape, it was less so for the huge investment banks which provide funds for speculation in financial assets and speculate themselves.  They are required to have higher capital ratios (capital relative to loans and financial assets) and the gap between the minimum requirement and their ratios is much less.

Accelerating inflation may be an issue right now in the US and other recovering capitalist economies, and it certainly bites into any recovery in labour incomes; but for capitalism, profitability is the real benchmark and that can be hit by wage rises on the one hand and interest rises on the other. If it is, that is the basis for a new slump.

14 Aug 20:18

Crunchatize Me, Cap’n!

by zachcranor




The post Crunchatize Me, Cap’n! appeared first on Last Place Comics.

14 Aug 20:11

Original Character

by zachcranor


The post Original Character appeared first on Last Place Comics.

14 Aug 20:07

Hot New Dance

by zachcranor


The post Hot New Dance appeared first on Last Place Comics.

05 Aug 16:52

U.S. Cyber Command's Cyber National Meme Force

by jwz

tax $$$$

The Pentagon doesn't meme like you or I. Before the DoD's cyber warriors can shitpost, images must be approved, tweets drafted and redrafted, and everything has to go through the chain of command.

From conception to deployment, the picture of the Soviet bear dropping malware candy took 22 days. The tweet got 364 likes and was retweeted 190 times. [...]

The bumbling bear is part of an effort by U.S. Cyber Command to make Russian hackers look uncool online. "We don't want something they can put on T-shirts, we want something that's in a PowerPoint their boss sees and he loses his shit on them," an anonymous U.S. official told CyberScoop in November, 2020. [...]

Cyber Command's response to the report contained a detailed explanation of why it's making bad memes. According to Cyber Command, they "impose costs on adversaries by disclosing their malware," and the graphics "are used and included to increase engagement and resonate with the Cybersecurity industry." Though it did admit that "the graphics may not be shaping adversary behavior."

Previously, previously, previously, previously.

05 Aug 16:48

Why ‘historical revisionism’ is spreading on Japanese Wikipedia, and how to solve it | Yumiko Sato’s Music Therapy Journal

by slaporte

well that's no good. also, second only to English??

01 Jun 08:02

DMCA Notice Targets TorrentFreak, Netflix, and Reddit’s Wikipedia Pages

by Ernesto Van der Sar

lol #stephen

Over the past several years, copyright holders have asked Google to remove billions of links to allegedly pirated content.

Most of these DMCA notices are pretty accurate. However, we keep stumbling on glaring errors, which are often hard to explain.

The Score Group Misses

Today we have another example. Late last month, adult entertainment distributor The Score Group sent Google a takedown notice identifying more than 300 copyright infringing URLs.

A quick glance at the request indeed shows that the notice includes several problematic links. However, it also lists more than two dozen Wikipedia pages. This includes the Wikipedia entries of well-known pirate brands such as YIFY, BTDigg, and KickassTorrents.

These Wikipedia pages don’t list or link to any infringing material. They clearly shouldn’t be removed but, in a way, it’s understandable since these URLs were probably caught up in an automated keyword filter.


Unfortunately, however, it doesn’t stop there. For reasons unknown, the list of ‘copyright infringing’ Wikipedia entries also includes TorrentFreak and other news sites such as The Verge and The Financial Times. The same is true for the movie review sites IMDb and Rotten Tomatoes.

Targeting Wikipedia’s Wikipedia Entry…

And it goes on. The Wikipedia entries for Domino’s Pizza and Project Gutenberg were also marked, and just when we thought we’d seen it all, we spotted the Wikipedia entry for Wikipedia itself.

It remains a mystery how these links ended up in the takedown notice. None of these sites or their Wikipedia entries have a clear connection to the adult entertainment company and they are perfectly legal.

Good and Bad News

The good news is that Google spotted all of these errors. This means that the links haven’t been removed from its search results.

The same is true for the IMDb pages for “Iron Man 2,” “Elmo’s World: Reach for the Sky,” and “Ernest Scared Stupid” which The Score Group tried to take offline with a separate DMCA notice. The company even went after the American Bar Association, which should be able to confirm that this isn’t how the DMCA law is supposed to work.

It is worth keeping an eye on these types of mistakes. While Google is great at spotting overbroad takedown notices, it occasionally misses some as well, which results in perfectly legal URLs being removed.

From: TF, for the latest news on copyright battles, piracy and more.

01 Jun 08:02

The Pirate Bay Remains Resilient, 15 Years After The Raid

by Ernesto Van der Sar

A good tradition

There are a handful of traditions we have at TorrentFreak, and remembering the first raid on The Pirate Bay is one of them.

Not only was it the first major story we covered, it also had a significant impact on how the piracy ecosystem evolved over the years. It also changed the lives of the site’s co-founders, who were eventually convicted.

While a lot has changed over the years, The Pirate Bay is still around and there are no signs that this will change anytime soon. What many people may not realize, however, is that without a few essential keystrokes in the site’s early years, the site would be a distant memory today.

This is what happened.

On May 31, 2006, less than three years after The Pirate Bay was founded, 65 Swedish police officers entered a datacenter in Stockholm. The police had instructions to shut down the Pirate Bay’s servers as part of a criminal probe, following pressure from the US Government.

As the police were about to enter the datacenter, Pirate Bay co-founders Gottfrid Svartholm and Fredrik Neij knew that something wasn’t quite right. In the months prior, both men noticed they were being tailed by private investigators, but this time their servers were the target.

At around 10:00 in the morning, Gottfrid told Fredrik that there were police officers at their office. He asked his colleague to get down to the co-location facility and get rid of the ‘incriminating evidence’ although none of it – whatever it was – was related to The Pirate Bay.

A Crucial Backup

As Fredrik was leaving, he suddenly realized that the problems might be linked to their torrent tracker. Just in case, he decided to make a full backup of the site.

When he later arrived at the co-location facility, those concerns turned out to be justified. Dozens of police officers were moving through the building, carting away servers, most of which belonged to clients unrelated to The Pirate Bay.

Footage from The Pirate Bay raid

In the days that followed, it became clear that Fredrik’s decision to create a backup of the site was probably the most pivotal moment in the site’s history. Because of this backup, Fredrik and the rest of the Pirate Bay team managed to resurrect the site within three days.

“The Police Bay”

Of course, the entire situation was handled with the mockery TPB had become known for.

Unimpressed, the site’s operators renamed the site “The Police Bay”, complete with a new logo shooting cannonballs at Hollywood. A few days later this logo was replaced by a Phoenix, a reference to the site rising from its digital ashes.

Logos after the raid

Far from shutting the site down, the raid propelled The Pirate Bay into the mainstream press, not least due to its swift resurrection. The publicity also triggered a huge traffic spike for TPB, exactly the opposite of the effect Hollywood had hoped for.

The US Pushed Sweden

Although the raid and the subsequent criminal investigation were carried out in Sweden, the US Government played a major role behind the scenes. For many years the scale of that involvement was unknown. However, information obtained through a Freedom of Information Act request in 2017 helped to fill in some blanks.

The trail started with a cable sent from the US Embassy in Sweden to Washington in November 2005, roughly six months before the Pirate Bay raid. The Embassy wrote that Hollywood’s MPA met with US Ambassador Bivins and, separately, with the Swedish State Secretary of Justice. The Pirate Bay was one of the top agenda items.

“The MPA is particularly concerned about PirateBay, the world‘s largest Torrent file-sharing tracker. According to the MPA and based on Embassy’s follow-up discussions, the Justice Ministry is very interested in a constructive dialogue with the US. on these concerns,” the cable read.

From the US Embassy Cable

The Embassy explained that Hollywood would like Sweden to take action against a big player such as The Pirate Bay.

“We have yet to see a ‘big fish’ tried – something the MPA badly wants to see, particularly in light of the fact that Sweden hosts the largest Bit Torrent file-sharing tracker in the world, ‘Pirate-Bay’, which openly flaunts IPR,” the cable writer commented.

Fast forward half a year and indeed, 65 police officers were ready to take The Pirate Bay’s servers offline. While there is no written evidence that US officials were actively involved in planning the investigation or raid, indirectly they played a major role.

TPB Takedown Award

This is also backed up by further evidence. In a cable sent in April 2007, the Embassy nominated one of its employees, whose name is redacted, for the State Department’s Foreign Service National (FSN) of the year award. Again, The Pirate Bay case was cited.

“REDACTED skillful outreach directly led to a bold decision by Swedish law enforcement authorities to raid Pirate Bay and shut it down. This was recognized as a major achievement in Washington in furthering U.S. efforts to combat Internet piracy worldwide.”

We don’t know if the employee in question received his or her award. In hindsight, however, the raid did very little to deter piracy.

The Aftermath

The swift and defiant comeback turned the site’s founders into heroes for many. The site made headline news around the world and in Stockholm, people were waving pirate flags in the streets, a sentiment that benefited the newly founded Pirate Party as well.

The raid eventually resulted in negative consequences for the site’s founders. It was the start of a criminal investigation, which led to a trial, and prison sentences for several of the site’s key players.

This became another turning point. Many of the people who were involved from the early days decided to cut their ties with the site, which was handed over to a more anonymous group.

The outspokenness of the early years is gone today and it’s a mystery who currently pulls the strings. What we do know is that The Pirate Bay is still seen as a piracy icon by many. And the current operator will probably do everything he can to keep the site online, just like on May 31, 2006.

From: TF, for the latest news on copyright battles, piracy and more.

26 Apr 15:56

Sovereign Writers and Substack

by Ben Thompson

Since we're already overdue chatting about this, Stephen, we can start here if you like :)

There has been a bit of controversy around Substack over the last week; I’m not going to get into the specifics of various accusations made about various individuals, or their responses; however, I do think that there are a few fundamental issues about the Substack model specifically, and the shift to sovereign writers generally, that are being misunderstood.

I’m going to anchor on this piece from Peter Kafka at Recode:

Substack’s main business model is straightforward. It lets newsletter writers sell subscriptions to their work, and it takes 10 percent of any revenue the writers generate (writers also have to fork over another 3 percent to Stripe, the digital payments company)…

The money that Substack and its writers are generating — and how that money is split up and distributed — is of intense interest to media makers and observers, for obvious reasons. But the general thrust isn’t any different from other digital media platforms we’ve seen over the last 15 years or so.

From YouTube to Facebook to Snapchat to TikTok, big tech companies have long been trying to figure out ways they can convince people and publishers to make content for them without having to hire them as full-time content creators. That often involves cash: YouTube set the template in 2007, when it set up a system that let some video makers keep 55 percent of any ad revenue their clips generated…Like Substack, YouTube and the other big consumer tech sites fundamentally think of themselves as platforms: software set up to let users distribute their own content to as many people as possible, with as little friction as possible.

I’m not sure the connection to Facebook and YouTube holds (even with Substack Pro, which I’ll get to in a moment); as Kafka notes, Substack “lets newsletter writers sell subscriptions to their work”; that, by definition, means that Substack is not “figur[ing] out ways they can convince people and publishers to make content for them without having to hire them as full-time content creators”. Just look at my credit card statement, where I happened to find charges for Casey Newton’s Platformer and Bill Bishop’s Sinocism next to each other:

Credit card charges from Platformer and Sinocism (not Substack)

Notice that the names of the merchant, the phone number of the merchant, and the location are different — that’s because they are different merchants. Substack is a tool for Newton and Bishop to run their own business, no different than, say, mine; Kafka writes:

To be clear: You don’t need to work with a company like Substack or Ghost to create and sell your own newsletter. Ben Thompson, the business and technology writer whose successful newsletter served as the inspiration for Substack, built his own infrastructure cobbling together several services; my former colleague Dan Frommer does the same thing for his New Consumer newsletter. And Jessica Lessin, the CEO of the Information, told me on the Recode Media podcast that she’d consider letting writers use the paid newsletter tech her company has built for free.

Here’s what you see on your credit card statement for Stratechery:

Credit card statement from Stratechery

My particular flavor of membership management software is Memberful, but Memberful is not Stratechery’s publisher; I am. Memberful is a tool I happen to use to run my business, but it has no ownership of or responsibility for what I write. Moreover, Memberful — like Substack — doesn’t hold my customers’ billing data; Stripe does, and that charge is from my Stripe account, just as the first two charges are from Newton and Bishop’s respective Stripe accounts.

This is what makes “the intense interest of media makers and observers” so baffling. There is a very easy and obvious answer to “how that money is split up and distributed”: subscriber money goes to the person or publication the subscriber subscribes to. That’s it! Substack is a tool for the sovereign writer; the sovereign writer is not a Substack employee, creator, contractor, nothing. Users quite literally pay writers directly, who pass on 10% to Substack; Substack doesn’t get any other say in “how that money is split up and distributed.”

But what about Substack Pro?

Substack Pro

Back in 2017 I wrote a post called Books and Blogs that explained why subscriptions were a much better model for writers than books:

A book, at least a successful one, has a great business model: spend a lot of time and effort writing, editing, and revising it up front, and then make money selling as many identical copies as you can. The more you sell the more you profit, because the work has already been done. Of course if you are successful, the pressure is immense to write another; the payoff, though, is usually greater as well: it is much easier to sell to customers you have already sold to before than it is to find customers for the very first time…

Since then it has been an incredible journey, especially intellectually: instead of writing with a final goal in mind — a manuscript that can be printed at scale — Stratechery has become in many respects a journal of my own attempts to understand technology specifically and the way in which it is changing every aspect of society broadly. And, it turns out, the business model is even better: instead of taking on the risk of writing a book with the hope of one-time payment from customers at the end, Stratechery subscribers fund that intellectual exploration directly and on an ongoing basis; all they ask is that I send them my journals of said exploration every day in email form.

Recurring revenue is much better than selling a book once; however, just as you have to spend time to write a book before you can sell it, you need time to build up a subscriber base that supports a full-time subscription. I accomplished this by writing Stratechery on nights and weekends while working at Microsoft and Automattic, and then, when I started charging, basically jumping off of the deep end, but working writers can’t always do the former (I would bet that publications start being stricter about this going forward). This is where Substack Pro comes in; from Kafka:

But in some cases, Substack has also shelled out one-off payments to help convince some writers to become Substack writers, and in some cases those deals are significant. Yglesias says that when it lured him to the platform last fall, Substack agreed to pay him $250,000 along with 15 percent of any subscription revenue he generates; after a year, Yglesias’s take will increase to 90 percent of his revenue, but he won’t get any additional payouts from Substack.

As Yglesias told me via Slack (he stopped working as a Vox writer last fall but still contributes to Vox’s Weeds podcast), the deal he took from Substack is actually costing him money, for now. Yglesias says he has around 9,800 paying subscribers, which might generate around $860,000 a year. Had he not taken the Substack payment, he would keep 90 percent of that, or $775,000, but under the current deal, where he’ll keep the $250,000 plus 15 percent of the gross subscription revenue, his take will be closer to $380,000.

Substack has been experimenting with this kind of offer for some time, but last week, it began officially describing them as “Substack Pro” deals.

In short, the best analogy to Substack Pro is the book advance, which is definitely something that publishers do. In that case publishers give an author a negotiated amount of money in advance of writing a book for reasons that can vary; in the case of famous people the advance represents the outcome of a bidding war for what is likely to be a hit, while for a new or unknown author an advance provides for the author’s livelihood while they actually write the book. The publisher then keeps all of the proceeds of the book until the advance is paid back, and splits additional proceeds with the author, usually in an 85/15 split (sound familiar?); of course we don’t know the exact details of book deals, because they are not disclosed.1

At the same time, Substack Pro differs from a book advance in one way that is much more advantageous to the writer. Book publishers own the publishing rights and control the royalties for as long as the book is in print; writers in the Substack Pro program still own their customers and all of the revenue past the first year, of which they can give 10% to Substack for continued use of their tool, or switch to another tool. This is where the comparison to YouTube et al falls flat: YouTube wants to be permanently in the middle of the creator-viewer relationship, while Substack remains to the side; from this perspective Substack Pro is more akin to an unsecured bank loan — success or failure is still determined by the readers.
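To make the economics concrete, here is a quick sketch of the two deal structures using the figures from the Recode excerpt above (these are approximations from that reporting, not Substack's published fee schedule; Stripe's ~3% cut is ignored for simplicity):

```python
# Rough comparison of the standard deal vs. the reported Substack Pro deal.
# Figures come from the Recode excerpt quoted above; treat as approximations.
gross_annual = 860_000  # estimated yearly subscription revenue for Slow Boring

# Standard deal: writer keeps ~90% after Substack's 10% cut.
standard_take = 0.90 * gross_annual

# Substack Pro, as reported for Yglesias: a $250k advance plus 15% of gross
# in year one, reverting to the standard split afterwards.
pro_take_year_one = 250_000 + 0.15 * gross_annual

print(round(standard_take))      # ~774,000 ("would keep ... $775,000")
print(round(pro_take_year_one))  # ~379,000 ("closer to $380,000")
```

The sketch makes the trade visible: the writer gives up year-one upside in exchange for certainty, and Substack profits only if the writer succeeds.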

The Real Scam

Now granted, there may be some number of Substack Pro participants who end up earning less than their advance, particularly if Substack sees Substack Pro as more of a marketing tool to shape who uses Substack; if Substack actually runs Substack Pro like a business, though, I would expect lots of deals like the Yglesias one, which has turned out to be quite profitable for Substack, as Yglesias himself has acknowledged.

Substack Pro made it possible for Yglesias to launch Slow Boring without worrying about paying the bills, and Substack is making a profit as its reward for bearing the risk of Yglesias not succeeding, or succeeding more slowly than he needed to. As for Yglesias, he may end up missing out on several hundred thousand dollars this year, but given that he is selling not a book but a subscription, he can look forward to a huge increase in revenue next year.

This, needless to say, is not a scam, which is what Annalee Newitz argued:

For all we know, every single one of Substack’s top newsletters is supported by money from Substack. Until Substack reveals who exactly is on its payroll, its promises that anyone can make money on a newsletter are tainted. We don’t have enough data to judge whether to invest our creative energies in Substack because the company is putting its thumb on the scale by, in Hamish’s own words, giving a secret group of “financially constrained writers the ability to start building a sustainable enterprise.” We are, not to put too fine a point on it, being scammed.

It is, for the reasons I laid out above, easier to get started with a subscription business if you have an advance. No question. But this take is otherwise completely nonsensical: Substack’s top newsletters are at the top because they have the most subscribers paying the authors directly. For example, look at the “Politics” leaderboard, where Yglesias is seventh:

Substack's 'Politics' leaderboard

We already know that “Thousands of Subscribers” to Slow Boring is 9,800; given that 9,800 * $8/month = $78,400/month, we can surmise that The Weekly Dish has at least 15,680 subscribers ($78,400/month ÷ $5/month). Those are real people paying real dollars of their own volition, not because Substack is somehow magically making them do it.
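That back-of-envelope inference can be written out explicitly (a sketch using only the subscriber counts and prices cited above):

```python
# Inferring a minimum subscriber count from leaderboard position: if
# The Weekly Dish ranks ahead of Slow Boring, its monthly subscription
# revenue must be at least as large as Slow Boring's.
slow_boring_subs = 9_800
slow_boring_price = 8    # dollars per month
weekly_dish_price = 5    # dollars per month

slow_boring_monthly = slow_boring_subs * slow_boring_price  # 78,400
weekly_dish_min_subs = slow_boring_monthly // weekly_dish_price

print(weekly_dish_min_subs)  # 15680
```

This is a floor, not an estimate: annual plans, discounts, and higher tiers would only push the real number around, but the ranking guarantees at least this much revenue.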

Frankly, it’s hard to shake the sense that Newitz and other Substack critics simply find it hard to believe that there is actually a market for independent writers, which I can understand: I had lots of folks tell me Stratechery was a stupid idea that would never work, but the beautiful thing about being my own boss is that they don’t determine my success; my subscribers do, just as they do every author on Substack.

Still, that doesn’t change the fact there is a real unfair deal in publishing, and it has nothing to do with Substack. Go back to Yglesias: while I don’t know what he was paid by Vox (it turns out that Substack, thanks to their leaderboards, is actually far more transparent about writers’ income than nearly anywhere else), I’m guessing it was a lot less than the $380,000 he is on pace for, much less the $775,000 he would earn had he forgone an advance.2 It was Vox, in other words, that was taking advantage of Yglesias.

This overstates things, to be sure; while Yglesias built his following on his own, starting with his own blog in 2002, Vox, which Yglesias co-founded, was a team effort, including capital from Vox Media. Still, if we accept the fact that Yglesias charging readers directly is the best measurement of the value those readers ascribe to his writing, then by definition he was severely under-compensated by Vox. The same story applies to Andrew Sullivan, the author of the aforementioned The Weekly Dish; Ben Smith reported:

But Mr. Sullivan is, as his friend Johann Hari once wrote, “happiest at war with his own side,” and in the Trump era, he increasingly used the weekly column he began writing in New York magazine in 2016 to dial up criticism of the American left. When the magazine politely showed him the door last month, Mr. Sullivan left legacy media entirely and began charging his fans directly to read his column through the newsletter platform Substack, increasingly a home for star writers who strike out on their own. He was not, he emphasizes, “canceled.” In fact, he said, his income has risen from less than $200,000 to around $500,000 as a result of the move.

Make that nearly $1 million, i.e. $800,000 of surplus value that New York Magazine showed the door.

Substack Realities

Of course things aren’t so simple; Sullivan, like several of the other names on that leaderboard, is, to put it gently, controversial. That he, along with other lightning-rod writers, ended up on Substack is more a matter of “where else would they go?” Again, the entire point is that Sullivan’s readers are paying Sullivan, which means Substack was an attractive option precisely because Substack doesn’t decide who gets paid what — or if they get paid at all.

Just because Sullivan was forced to be a sovereign writer, though, doesn’t change the fact that writers who can command a paying audience have heretofore been significantly underpaid. That points to the real reason why the media has reason to fear Substack: it’s not that Substack will compete with existing publications for their best writers, but rather that Substack makes it easy for the best writers to discover their actual market value.

This is where Substack really is comparable to Facebook and the other big tech companies; the media’s revenue problems are a function of the Internet unbundling editorial and advertising. The fact that Google and Facebook now make a lot of money from advertising is unrelated. Similarly, media’s impending cost problem — as in they will no longer be able to afford writers that can command a paying audience — is a function of the Internet making it possible to go direct; that Substack is one of many tools competing to make this easier will be similarly unrelated.

This explains three other Substack realities:

  • First, Substack is going to have a serious problem retaining its most profitable writers unless it substantially reduces its 10% take.
  • Second, Substack is less threatened by Twitter and Facebook than many think; the problem with the social networks is that they want to own the reader, but the entire point of the sovereign writer is that they own their audience. Substack’s real threat will be lower-priced competitors.
  • Third, it would be suicidal for Substack to kick any successful writers off of its platform for anything other than gross violations of the law or its terms of service. That would be a signal for every successful writer to seek out a platform that is not just cheaper, but also freer (i.e. open-source).

This is also why Substack Pro is a good idea. To be honest, I was a tad bit generous above: signing up someone like Yglesias is closer to the “popular author bidding war” side of the spectrum, and may not be worth the trouble; what would be truly valuable is helping the next great writer build a business, perhaps in exchange for more lock-in or the rights to bundle their work. Ideally these writers would be the sort of folks who would have never gotten a shot in traditional media, because they don’t fit the profile of a typical person in media, and/or want to cover a niche that no one has ever thought to cover (these are almost certainly related).

The Sovereign Writer

I am by no means an impartial observer here; obviously I believe in the viability of the sovereign writer. I would also like to believe that Stratechery is an example of how this model can make for a better world: I went the independent publishing route because I had no other choice (believe me, I tried).

At the same time, I suspect we have only begun to appreciate how destructive this new reality will be for many media organizations. Sovereign writers, particularly those focused on analysis and opinion, depend on journalists actually reporting the news. This second unbundling, though, will divert more and more revenue to the former at the expense of the latter. Maybe one day Substack, if it succeeds, might be the steward of a Substack Journalism program that offers a way for opinion writers and analysts to support those that undergird their work.3

What is important to understand, though, is that Substack is not in control of this process. The sovereign writer is another product of the Internet, and Substack will succeed to the extent it serves their interests, and be discarded if it does not.

I wrote a follow-up to this Article in this Daily Update.

  1. Substack reportedly pays 15% of all revenue, not just revenue above-and-beyond the advance.
  2. There is an anonymous Google Doc with self-reported media salaries; only three individuals make more than $380,000, and none more than $775,000.
  3. I am still bullish on the Local News Business Model.

26 Apr 06:25

Geopardy

by Nicholas Gurewitch

The post Geopardy appeared first on The Perry Bible Fellowship.

18 Apr 00:33

Sumana Harihareswara - Cogito, Ergo Sumana: Trying to Notice What's Missing


i've been thinking about this too... i suspect yes.

I'm ploughing through some open source project email threads and thinking:

In 2010, people got together in Berlin for a Wikimedia developers' meeting .... and then a bunch of them hung around a lot longer than they'd expected, because a volcano erupted and so their flights got cancelled. As I understand it, you can trace certain architectural decisions and improvements to the discussions and pair programming from that chunk of unexpected extra in-person time.

It's conference season, at least in the northern hemisphere, and we're going into our second year of virtualized or missing technology conferences. The maintainers, users, and stakeholders of the open source software you depend on have gone more than a year without getting to quietly gossip with each other over a snack or while walking to a sponsored party. It's been more than a year since one guy has been able to make that other guy laugh and remember "ah, he's not so bad really". It's been more than a year since people could easily scribble boxes and arrows together on the back of a conference schedule or poke at the demo on someone's laptop.

We come together every once in a while to refill on trust and camaraderie and a shared understanding of what we're trying to do and who we're trying to do it for; I assume that, for some folks, those wells have now run dry.

In a tree's rings you can see the years of drought. Where, in our code and our conversations, will we see the record of this separation? Do you already see it?

18 Apr 00:32

Sumana Harihareswara - Cogito, Ergo Sumana: Python Packaging Tools: Security Work And An Open Position


800k is way more than CZI granted even. Looking into how TUF plays into it now.

Two exciting bits of news regarding massively improving how we package, distribute, and install Python software!

First: a new grant. New York University (specifically Professor Justin Cappos) and I have successfully asked the US National Science Foundation for a grant to improve Python packaging security. The NSF is awarding NYU $800,000 over two years, from mid-2021 to mid-2023, to further improve the pip dependency resolver and to integrate The Update Framework further into the packaging toolchain. I shared more details in this announcement on an official Python packaging forum.

I'll be part of this work, paid to work on this part-time, doing some outreach, coordination, project management, and similar. Thanks to the NSF, Justin, the Secure Systems Lab at NYU, and all the people who work on Python packaging tools!

Second: the Python Software Foundation is hiring a full-time project manager and community manager for Python's packaging toolchain. Thanks to Bloomberg for the funding! Please check out the job description and spread the news. Please apply by May 18th, 2021.

The job is remote and you can apply from anywhere in the world. As the description says: "Total compensation will range from $100k-$125k USD based on qualifications and experience." And you'd report to Ee W. Durbin III, a colleague I strongly recommend and love working with.

I'm thoroughly grateful that we've now gotten to the point where the PSF can hire for a full-time person for this role. As a volunteer and as a contractor, I've performed -- in many cases initiated -- the activities that this person will do, and I've seen the critical need. We deeply need a full-time coordinator for holistically assessing and improving the user and developer experience of Python packaging, because -- as Russell Keith-Magee said in his PyCon US 2019 keynote -- the status quo poses "an existential threat" to the future of the language. And so one of the desired qualifications for the role is: "Belief that Python packaging problems are of critical importance for the Python language... but that those problems are solvable."

We've gotten better and better at attracting corporate and grant funding -- and yes, I'll take some credit for that, with my past work researching and writing grant proposals, leading funded projects, and volunteering with the Packaging Working Group and cofounding the Project Funding Working Group. So, now, what should we focus on? We need to prioritize improvements for strategic value (e.g., should we first concentrate on overhauling the Warehouse API, or making a generic wheel-builder service, or tightening metadata compliance, or ....?). What can we learn from other package management toolchains, especially those that emerged after PyPI and pip (e.g., yarn, npm, cargo), and what should we copy? In my opinion, you do not need to already have an opinion on these questions to apply for this role -- you just have to be interested in talking with a bunch of stakeholders, poking through past discussions, and collaboratively developing some answers.

I won't be applying for this PSF role -- I'm going to be, instead, excited to collaborate with that person and help them learn all the stuff I know, so that in the long run, we'll have more people, with that set of skills and domain knowledge, working on Python packaging. I'll concentrate on the Python supply chain security piece specifically (via the NSF-funded work at NYU), plus finishing my book and maybe creating and leading associated trainings, and taking what I've learned to other languages and ecosystems through client work.

So: please spread the word and apply!

18 Apr 00:27

GitHub Reinstated YouTube-DL But Restoring Forks is Apparently a Problem

by Andy Maxwell

torrentfreak still doing the real tech reporting

After the RIAA had youtube-dl removed from GitHub last year, the platform decided to reinstate the YouTube-ripping tool, claiming that the industry group's takedown was unwarranted. However, users who forked the project weren't so lucky and according to a counternotice filed this week, GitHub isn't responding to informal restoration requests. There is probably a good reason for that.

From: TF, for the latest news on copyright battles, piracy and more.

21 Mar 08:02

WALLY WINTER "The Race" | adult swim smalls

by Adult Swim

PRS really toned it down for AS lol

Created by Pilot Red Sun

Previous [as] Appearances

Follow Pilot Red Sun on Instagram:


About Adult Swim:
Get your Adult Swim fix whenever and wherever you want at, or by downloading the Adult Swim app. Binge marathons or watch selected episodes of many of your favorite shows including Rick and Morty, Robot Chicken, Venture Bros., Aqua Teen Hunger Force and many more. And check out the Live Stream, our block of live, interactive shows every weekday:

Connect with Adult Swim Online:
Download the APPS:
Visit Adult Swim WEBSITE:
Like Adult Swim on FACEBOOK:
Follow Adult Swim on TWITTER:
Follow Adult Swim on INSTAGRAM:
21 Mar 07:54

CURRENT MOOD "Birthdays" | adult swim smalls

by Adult Swim

i wonder if she'll ever meet john wilson...

Created by Sarah Shaw
Music by Shively Humperdink

Sarah Shaw is a 22 year old video creator, artist, and songwriter from Dallas, Texas. She is inspired by life’s parallels and her favorite food is ketchup.

Follow Sarah on Instagram:
Follow Shively Humperdink on Instagram:

previous [as] appearances

21 Mar 07:49

LITHIUM "Art Project" | adult swim smalls

by Adult Swim

accidentally clicked, unexpected lol

Created by Austen Reeder
Character Rigging by Shively Humperdink

Austen Reeder is a writer/editor from Southern California. He started creating comedy videos in 2017 and still hasn’t given up in 2021. He was last seen outside the abandoned Golden Corral in Moreno Valley, CA.

Follow Austen on Instagram:
Follow Shively Humperdink on Instagram:

Previous [as] Appearances
Star Boat “Meat Hole" -
Star Boat “Star Date" -

17 Mar 15:38

Changing the Tires on a Moving Codebase

by Mahmoud Hashemi

Upgraded SimpleLegal so future folks won't curse my name, starting at Stripe March 22.

2020 was a year of reckonings. And for all that was beyond one’s control, as the year went on, I found myself pouring more and more into the one thing that felt within reach: futureproofing the large enterprise web application I helped build, SimpleLegal.

Now complete, this replatforming easily ranks among my most complex projects, and right now, holds the top spot for the happiest ending. That happiness comes at a cost, but with the right approach that cost may not be as high as you think.

The Bottom Line

We took SimpleLegal’s primary product, a 300,000-line Django 1.11 / Python 2.7 / Redis / Postgres 10 codebase, to a Django 2.2 / Python 3.8 / Postgres 12 stack, on schedule and without major site incidents. And it feels amazing.

Speaking as tech lead on the project, what did it look like? For me, something like this:

But as Director of Engineering, what did it cost? 3.5 dev years and just about $2 per line of code.

And I'm especially proud of that result, because along the way, we also substantially improved the speed and reliability of both the site and development process itself. The product now has a bright future ahead, ready to shine in sales RFPs and compliance questionnaires. Most importantly, there’ll be no worrying about when to delicately break it to a candidate that they’ll be working with unsupported technology.

In short, a large, solid investment that’s already paying for itself. If you just came here for the estimate we wish we had, you've got it. This post is all about how your team can achieve the same result, if not better.

The Setup

The story begins in 2013, when a freshly YC-incubated SimpleLegal made all the right decisions for a new SaaS LegalTech company: Python, Django, Postgres, Redis. In classic startup fashion, features came first, unless technology was a blocker. Packages were only upgraded incidentally.

By 2019, the end of this technical runway had drawn near. While Python 2 may be getting extended support from various vendors, there were precious few volunteers in sight to do Django 1 CVE patches in 2021. A web framework’s a riskier attack surface, so we finally had our compliance forcing function, and it was time to pay off our tech debt.

The Outset

So began our Tech Refresh replatforming initiative, in Q4 2019. The goal: Upgrade the stack while still shipping features, like changing the tires of a moving car. We wanted to do it carefully, and that would take time. Here are some helpful ground rules for long-running projects:

  1. Any project that gets worked on 10+ hours per week deserves a 30-minute weekly sync.
  2. Every recurring meeting deserves a log. Put it in the invite. Use that Project Log to record progress, blockers, and decisions.
  3. It’s a marathon, not a sprint. Avoid relying on working nights, weekends, and holidays.

We started with a sketch of a plan that, generously interpreted, ended up being about halfway correct. Some early guesses that turned into successes:

  1. Move to pip-tools and unpin dependencies based on extensive changelog analysis. Identify packages without py23 compatible versions. (Though we’ve since moved to poetry.)
  2. Add line coverage reporting to CI
  3. Revamp internal testing framework to allow devs to quickly write tests

More on these below. Other plans weren’t so realistic:

  1. Take our CI from ~60% to 95% line coverage in 6 months
  2. Parallelized conversion of app packages over the course of 3 months
  3. Use low traffic times around USA holidays (Thanksgiving, Christmas, New Year's) to gradually roll onto the new app before 2021.

We were young! As naïve as we were, at least we knew it would be a lot of work. To help shoulder the burden, we scouted, hired, and trained three dedicated off-shore developers.

The Traction Issues

Even with added developers, by mid-2020 it was becoming obvious we were dreaming about 95% coverage, let alone 100%. Total coverage may be best practice, but 3.5 developers couldn’t cover enough ground. We were getting valuable tests, and even finding old bugs, but if we stuck with the letter of the plan, Django 2 would end up being a 2022 project. At 70%, we decided it was time to pivot.

We realized that CI is more sensitive than most users for most of the site. So we focused in on testing the highest impact code. What’s high-impact? 1) the code that fails most visibly and 2) the code that’s hardest to retry. You can build an inventory of high-impact code in under a week by looking at traffic stats, batch job schedules, and asking your support staff.

Around 80% of the codebase falls outside that high-traffic/high-impact list. What to do about that 80%? Lean in on error detection and fast time-to-fix.

The Sentry Pivot

One nice thing about startup life is that it’s easy to try new tools. One practice we’ve embraced at SimpleLegal is to reserve every 5th week for developers to work on the development process itself, like a coordinated 20% time. Even the best chef can’t cook five-star food in a messy kitchen. This was our way of cleaning up the shop and ultimately speeding up the ship.

During one such period, someone had the genius idea to add dedicated error reporting to the system, using Sentry. Within a day or two, we had a site you could visit and get stack traces. It was pretty magical, and it wasn’t until Tech Refresh that we realized that while integration takes one dev-day, full adoption can take a team months.

You see, adding Sentry to a mature-but-fast-moving system means one thing: noise. Our live site was erroring all the time. Most errors weren’t visible or didn’t block users, who in some cases had quietly learned to work around longstanding site quirks. Pretty quickly, our developers learned to treat Sentry as a repository of debugging information. A Sentry event on its own wasn’t something to be taken seriously in 2019. That changed in 2020, with the team responsible for delivering a seamless replatform needing Sentry to be something else: a responsive site quality tool.

How did we get there? First step, enhance the data flowing into Sentry by following these best practices:

  1. Split up your products into separate Sentry projects. This includes your frontend and backend.
  2. Tag your releases. Don’t tag dev env deployments with the branch; it clutters up the Releases UI. Add a separate branch tag for searches.
  3. Split up your environments. This is critical for directing alerts. Our Sentry client environment is configured by domain conventions and Django’s sites framework. If it helps, here's a baseline; we use these environments:
    • Production: Current official release. DevOps monitored.
    • Sandbox: Current official release (some companies do next release). Used by customers to test changes. DevOps monitored.
    • Demo/Sales: Previous official release. Mostly internal traffic, but external visibility at prospect demo time. DevOps monitored.
    • Canary: Next official release. Otherwise known as staging. Internal traffic. Dev monitored.
    • ProdQA: Current official release. Used internally to reproduce support issues. Dev monitored.
    • QA: Dev branches, dev release, internal traffic. Unmonitored debugging data.
    • Local test/CI: Not published to Sentry by default.
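As a sketch of how that domain-convention mapping might look — the hostname patterns below are invented for illustration, and the real configuration would pass the result to `sentry_sdk.init`'s `environment` parameter:

```python
# Hypothetical sketch: choose the Sentry environment from the deployed
# hostname. The prefixes here are illustrative, not SimpleLegal's.
PREFIX_RULES = [
    ("sandbox.", "sandbox"),
    ("demo.", "demo"),
    ("canary.", "canary"),
    ("prodqa.", "prodqa"),
    ("qa.", "qa"),
]

def sentry_environment(hostname):
    """Map a deployment hostname to a Sentry environment name."""
    if hostname.startswith("localhost") or hostname.endswith(".test"):
        return None  # local test/CI: not published to Sentry
    for prefix, env in PREFIX_RULES:
        if hostname.startswith(prefix):
            return env
    return "production"  # everything else is the official release
```

Keeping the mapping in one pure function makes it trivial to unit-test the convention itself, independent of any Sentry client setup.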

With issues finally properly tagged and searchable, we used Sentry’s new Discover tool to export issues weekly, and prioritize legacy errors. To start, we focused on high-visibility production errors with non-internal human users. Our specific query: has:user !transaction:/api/* event.type:error !user.username:*@simplelegal.*

We triaged into 4 categories: Quick fix (minor bug), Quick error (turn an opaque 500 error into an actionable 400 of some form), Spike (larger bug, requires research), and Silence (using Sentry’s ignore feature). Over 6 weeks we went from over 2500 weekly events down to fewer than 500.

Further efforts have gotten us under 100 events per week, spread across a handful of issues, which is more than manageable for even a lean team. While "Sentry Zero" remains the ideal, we achieved and maintained the real goal of a responsive flow, in large part thanks to the Slack integration. Our team no longer hears about server errors from our Support team. In fact, these days, we let them know when a client is having trouble and we’ve got a ticket underway.

And it really is important to develop close ties with your support team. Embedded in our strategy above was that CI is much more sensitive than a real user. While perfection is tempting, it’s not unrealistic to ask for a bit of patience from an enterprise user, provided your support team is prepared. Sync with them weekly so surprises are minimized. If they’re feeling ambitious, you can teach them some Sentry basics, too.

The New Road

With noise virtually eliminated, we were ready to move fast. While the lean-in on fast-fixing Sentry issues was necessary, a strong reactive game is only useful if there are proactive changes being pushed. Here are some highlights we learned when making those changes:

Committing to transactions

Used properly, rollbacks can make it like errors never happened, the perfect complement to a fast-fix strategy.

The truly atomic request

Get as much as possible into the transactions. Turn on ATOMIC_REQUESTS, if you haven’t already. Some requests do more than change the database, though, like sending notifications and enqueuing background tasks.
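For reference, ATOMIC_REQUESTS is a per-connection flag in Django's database settings; a minimal fragment (the connection details here are placeholders):

```python
# Django settings fragment (placeholder names): wrap every request's
# view code in a transaction on this connection, so an unhandled error
# rolls the whole request back.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "appdb",
        "ATOMIC_REQUESTS": True,
    }
}
```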

At SimpleLegal, we rearchitected to defer all side effects (except logging) until a successful response was being returned. Middleware can help, but mainly we achieved this by getting rid of our Redis queue, and switching to a PostgreSQL-backed task queue/broker. This arrangement ensures that if an error occurs, the transaction is rolled back, no tasks are enqueued, and the user gets a clean failure. We spot the breakage in Sentry, toggle over to the old site to unblock, and their next retry succeeds.
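Django supports this natively via `transaction.on_commit`; as a framework-free sketch of the deferral pattern (all names here are illustrative, not SimpleLegal's code):

```python
# Minimal sketch of "defer side effects until commit": side effects
# (emails, task enqueues) are queued during the request and only run
# if the request succeeds; on error they are simply dropped, matching
# the rolled-back transaction.
class DeferredSideEffects:
    def __init__(self):
        self._pending = []

    def defer(self, fn, *args):
        # Queue a side effect instead of running it immediately.
        self._pending.append((fn, args))

    def commit(self):
        # Request succeeded: run every queued side effect.
        for fn, args in self._pending:
            fn(*args)
        self._pending = []

    def rollback(self):
        # Request failed: discard the queued side effects.
        self._pending = []


def handle_request(effects, work):
    try:
        result = work()
    except Exception:
        effects.rollback()
        raise
    effects.commit()
    return result
```

With a PostgreSQL-backed task queue, the enqueue itself lives inside the same database transaction, so the rollback covers it for free; the sketch above shows the same guarantee for side effects that live outside the database.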

Transactional test setup

Transactionality also proved key to our testing strategy. SimpleLegal had long outgrown Django’s primitive fixture system. Most tests required complex Python to set up, making tests slow to write and slow to run. To speed up both writing and running, we wrapped the whole test session in a transaction, then, before any test cases run, we set up exemplary base states. Test cases used these base states as fixtures, and rolled back to the base state after every test case. See this excerpt for details.
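A toy illustration of the idea, with an in-memory dict standing in for the database and deep copies standing in for transaction savepoints (the real implementation rolls back nested database transactions rather than copying state):

```python
import copy

# Sketch: build the expensive base state once, snapshot it, and roll
# every test case back to the snapshot instead of rebuilding.
class FixtureSession:
    def __init__(self):
        self.db = {}       # stands in for the test database
        self._base = None

    def set_up_base_state(self, builder):
        builder(self.db)                      # expensive setup, run once
        self._base = copy.deepcopy(self.db)   # like taking a savepoint

    def run_test(self, test):
        try:
            test(self.db)
        finally:
            # "roll back" to the base state after every test case
            self.db = copy.deepcopy(self._base)
```

The payoff is twofold: tests get rich shared fixtures without per-test setup code, and teardown is a rollback rather than a pile of delete logic.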

Better than best practices

Software scenarios vary so widely, there’s an art to knowing which advice isn’t for you. Here’s an assortment of cul de sacs we learned about firsthand.

The utility of namespaces

Given how code is divided into modules, packages, Django apps, etc., it may be tempting to treat those as units of work. Don’t start there. Code divisions can be pretty arbitrary, and it’s hard to know when you’ve pulled on a risky thread.

Assuming there are automated refactorings, as in a 2to3 conversion, start by porting by type of transformation. That way, one need only review a command and a list of paths affected. Plus, automated fixes necessarily follow a pattern, meaning more people can fix bugs arising from the refactor.

Coverage tools

Coverage was a mixed bag for us. Obviously our coverage-first strategy wasn’t tenable, but it was still useful for prioritization and status checks. On a per-change basis, we found coverage tools to be somewhat unreliable. We never got to the bottom of why coverage acted nondeterministically, and we left the conclusion at, “off-the-shelf tools like codecov are probably not targeted at monorepos of our scale.”

In running into coverage walls, we ended up exploring many other interpretations of coverage. For us, much higher-priority than line coverage were “route coverage” (i.e., every URL has at least one integration test) and “model repr coverage” (i.e., every model object had a useful text representation, useful for debugging in Sentry). With more time, we would have liked to build tools around those, and even around online-profiling based coverage statistics, to prioritize the highest traffic lines, not just the highest traffic routes. If you’ve heard of approaches to these ends, we’d love to discuss them with you.

Flattening database migrations

On the surface, reducing the number of files we needed to upgrade seems logical. Turns out, flattening migrations is a low-payoff strategy to get rid of files. Changing historical migration file structure complicated our rollout, while upgrading migrations we didn’t flatten was straightforward. Not to mention, if you just wanted the CI speedup, you can take the same page from the Open EdX Platform that we did: build a base DB cache that you check in every couple months.

Turns out, you can learn a lot from open-source applications.

Easing onto the stack

If you have more than one application, use the smaller, simpler application to pilot changes. We were lucky enough to have a separate app whose tests ran faster, making for a tighter development loop we could learn from. Likewise, if you have more than one production environment, start rollouts with the one with the least impact.

Clone your CI jobs for the new stack, too. They’ll all fail, but resist the urge to mark them as optional. Instead, build a single-file inventory of all tests and their current testing state. We built a small extension for our test runner, pytest, which bulk skipped tests based on a status inventory file. Then, ratchet: unskip and fix a test, update the file, check that tests pass, and repeat. Much more convenient and scannable than pytest mark decorators spread throughout the codebase. See this excerpt for details.
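A sketch of how such a status inventory might be parsed and applied — the file format and names are invented, and in practice this logic would run inside pytest's `pytest_collection_modifyitems` hook:

```python
# Hypothetical single-file test inventory: each line maps a test id to
# a status, and anything not yet marked "pass" stays bulk-skipped on
# the new stack until someone ratchets it forward.
def load_inventory(text):
    """Parse lines like 'tests/test_billing.py::test_total pass'."""
    inventory = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments
        test_id, status = line.rsplit(None, 1)
        inventory[test_id] = status
    return inventory

def should_skip(test_id, inventory):
    # Unknown tests default to "todo", i.e. skipped, so newly added
    # tests must be explicitly marked passing on the new stack.
    return inventory.get(test_id, "todo") != "pass"
```

The ratchet then becomes a one-line diff per fixed test: flip its status to `pass`, and CI starts enforcing it.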

The Rollout

In Q4 2020, we doubled up on infrastructure to run the old and new sites in parallel, backed by the same database. We got into a loop of enabling traffic to the new stack, building a queue of Sentry issues to fix, and switching it back off, while tracking the time. After around 120 hours of new stack, strategically spread around the clock and week, enough organizational confidence had been built that we could leave the site on during our most critical hours: Mondays and Tuesdays at the beginning of the month.

The sole hiccup was an AWS outage Thanksgiving week. At this point we were ahead of schedule, and enough confidence had been built in our fast-fix workflow that we didn’t need our original holiday testing windows. And for that, many thanks were given.

We kept at the fast-fix crank until we were done. Done isn't when the new system has no errors, it's when traffic on the new system has fewer events than the old system. Then, fix forward, and start scheduling time to delete the scaffolding.

The Aftermath

So, once you’re on current LTS versions of Django, Python, Linux, and Postgres, job complete, right?

Thankfully, tech debt never quite hits 0. While updating and replacing core technologies on a schedule is no small feat, replacing a rusty part with a shiny one doesn’t change a design. Architectural tech debt -- mistakes in abstractions, including the lack thereof -- can present an even greater challenge. Solutions to those problems don’t generalize between projects as cleanly, but they do benefit from up-to-date and error-free foundations.

For all the projects looking to add tread to their technical tires, we hope this retrospective helps you confidently and pragmatically retrofit your stack for years to come.

Finally, big thanks to Uvik for the talent connection, and the talent: Yaroslav, Serhii, and Oleh. Shoutouts to Kurt, Justin, and Chris, my fellow leads. And cheers to business leadership at SimpleLegal and everywhere, for seeing the value in maintainability.

17 Mar 05:23

Bitcoin mine cargo container literally incinerating planet

by jwz
Their argument is, "Bitcoin is 'green' because this oil well was just going to vent all that methane into the atmosphere anyway" -- or -- "It was on fire when I got here."

'Absurd' video of bitcoin mine hooked to an oil well sparks outrage

In states like Texas, where energy regulations are laxer, natural gas by-product can be vented, intentionally releasing gases, predominately potent methane, into the atmosphere. The other option is to set gas on fire in flare stacks to convert methane to carbon dioxide, slightly less dangerous when it comes to heating the planet in the short-term. [...]

"They're getting zero for this gas anyway so it makes almost no difference whether we're on that well-site or not," he said. [...] "We've had publicly-traded companies reach out to us and say, 'We don't even care if we lose money on this. We want to improve our public opinion." [...]

Alex de Vries, founder of Digiconomist which examines consequences of new technologies, and a data scientist focusing on financial economic crime for De Nederlandsche Bank, called the oil well-bitcoin mining rig set-up "absurd".

"It's mind-blowing the suggestion that it is somehow helping the environment to use a by-product of fossil fuel extraction for bitcoin mining," he told The Independent.

"We don't have a climate change problem because fossil fuel companies are not efficient enough. And if you make the operation more efficient, you are not helping the environment anyway. Intuitively it just doesn't make sense.

"Firstly, it's adding to the bottom line of fossil fuel extraction and secondly, it's still burning fossil fuels. We want to accelerate away from fossil fuels. We don't want to make fossil fuels more profitable. I can't wrap my head around it."

Previously, previously, previously, previously, previously.

02 Mar 09:15

OSM In Realtime

by slaporte

love it

18 Feb 08:50

"Was Google’s decision to kill Google Reader actually the key turning point in the destruction..."

“Was Google’s decision to kill Google Reader actually the key turning point in the destruction of western civilization? Kills the decentralized web, gives rise to Twitter and Facebook becoming the algorithmic overlords. Maybe…”

- Vinay Gupta
30 Jan 04:45

A New Ethical Open Source License for Machine Learning from ml5.js? – Michael Weinberg

by slaporte

any other decaying licenses you've seen?