Shared posts

12 Aug 20:19

Offbeat science: the smartphone is reshaping the human hand

by (Futura-Sciences)
According to a British study, 5% of the population have one thumb bigger than the other because of smartphone use. Young people are the most affected by this rather rapid transformation of the body...
19 Apr 18:58

Go Range Loop Internals

While they are very convenient, I always found Go's range loops a bit mystifying. I'm not alone in this: "Today's #golang gotcha: the two-value range over an array does a copy. Avoid by ranging over the pointer instead."
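A minimal Go sketch of that gotcha (variable names are mine): the two-value form copies the array once when the loop begins, so the loop body never observes later mutations, while ranging over a pointer to the array reads the original on each iteration.

```go
package main

import "fmt"

func main() {
	// Two-value range over an array: the array is copied once up front,
	// so v comes from that snapshot.
	a := [3]int{1, 2, 3}
	for i, v := range a {
		if i == 0 {
			a[1], a[2] = 0, 0 // mutate the original mid-loop
		}
		fmt.Print(v, " ") // prints 1 2 3 (the copy is unaffected)
	}
	fmt.Println()

	// Range over a pointer to the array: no copy is made, and each
	// element is read from the original at its iteration.
	b := [3]int{1, 2, 3}
	for i, v := range &b {
		if i == 0 {
			b[1], b[2] = 0, 0
		}
		fmt.Print(v, " ") // prints 1 0 0 (mutations are visible)
	}
	fmt.Println()
}
```

Besides the visibility difference, the pointer form also avoids the cost of copying a large array on every loop entry.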
10 Apr 07:38

Avoid Design by Committee: Generate Apart, Evaluate Together

by Lorenzo Pasqualis

This post was first published in its entirety on CoderHood as Avoid Design by Committee: Generate Apart, Evaluate Together. CoderHood is a blog dedicated to the human dimension of software engineering.

The tech industry thrives on innovation. Building innovative software products requires constant design and architecture of creative solutions. In that context, I am not a fan of design by committee; in fact, I believe that it is more of a disease than a strategy. It afflicts teams with no leadership or unclear leadership; the process is painful, and the results are abysmal.

Usability issues plague software products designed by committee. Such products look like collections of features stuck together without a unifying vision or a unique feeling; they are like onions, built as a series of loosely connected layers. They cannot bring emotions to the user because emotions are the product of individual passion.

Decisions and design by committee feel safe because of the sense of shared accountability, which is no accountability at all. A group's desire to strive for unanimity overrides the motivation to consider and debate alternative views. It reduces creativity and ideation to an exercise of consensus building.

Building consensus is the act of pushing a group of people to think alike. On that topic, Walter Lippmann famously said:

When all think alike, then no one is thinking.

In other words, building consensus is urging people to talk until they stop thinking and until they finally agree. Not many great ideas or designs come out that way.

I am a believer that the seeds of great ideas are generated as part of a creative process that is unique to each person. If you force a process where all designs must be produced and evaluated on the spot in a group setting, you are not giving introverted creative minds the time to enter their best focused and creative state. On the other hand, if you force all ideas to be generated in isolation without group brainstorming, you miss the opportunity to harvest collective wisdom.

That is why I like to implement a mixed model that I call "Generate Apart, Evaluate Together." I am going to talk about this model in the context of software and architecture design, but you can easily extend it to anything that requires both individual creativity and group wisdom. As with most models, I am not suggesting that you should apply it to every design or decision. It is not a silver bullet. Look at it as another tool in your toolbelt. I recommend it mostly for medium to large undertakings: significant design challenges, medium to large shifts, and foundational architecture choices.

Generate Apart, Evaluate Together

I believe in the power of a creative process that starts with one or more group brainstorms designed to frame a problem, evaluate requirements, and bring out and evaluate as many initial thoughts as possible. In that process, a technical leader guides the group to discuss viable alternatives but keeps the discussion at a high "boxes and arrows" level. Imagine a data-flow diagram on a whiteboard or a mind map of the most important concepts. Groups of people discussing design and architecture should not get bogged down in the weeds.

At some point during that high-level ideation session, the group selects one person to be an idea/proposal generation driver. After the group session is over, that person is in charge of doing research and creating, documenting, and eventually presenting more ideas and detailed solution options. If the problem space is vast, the group might select several drivers, one for each of the different areas.

The proposal generation driver doesn't have to do all the work in isolation. He or she can collaborate with others to come up with proposals, but he or she needs to lead the conversation and idea generation.

After the driver has generated data, ideas, and proposals, the group meets again to brainstorm, evaluate, and refine the material. At that point, the group might identify new drivers (or keep the old ones), and the process repeats iteratively.

As mentioned, I refer to this iterative process with the catchphrase "Generate Apart, Evaluate Together." Please do not interpret it too literally or as an absolute. First of all, some idea generation does happen as a group, but mostly at a high level. Also, some evaluation of ideas happens outside of the group. "Generate apart, evaluate together" is a reminder to avoid design by committee, in-the-weeds group meetings, and the isolated evaluation and selection of all ideas and solutions.

I synthesized the process with this quick back-of-the-napkin sequence diagram:


The "evaluate" part of "Generate apart, evaluate together" refers to one of the two different phases of evaluation mentioned above.

The Initial Evaluation is a group assessment of the business requirements followed by one or more brainstorms. The primary goal is to frame the problem and create an initial list of possible high-level architectural and technical choices and issues. The secondary objective is to select a driver for idea generation.

The Iterative Evaluation is the periodic group evaluation and refinement of ideas and proposals generated by the driver in non-group settings.

Some of the activities executed during both types of evaluation are:

  • Brainstorm business requirements and solutions.
  • Identify the proposal generation driver or drivers.
  • Identify next steps.
  • Review proposals.
  • Debate the pros and cons of proposed solutions.
  • Choose a high-level technical direction.
  • Argue.
  • Agree and commit, or disagree and commit.
  • Establish a communication strategy.
  • Estimate.
  • Report in, meaning the driver reports and presents ideas and recommendations to the group.

Note that there is no attempt to build consensus. The group evaluates and commits to a direction, regardless of whether there is unanimous agreement. Strong leadership is required to make this work.


The "generate" part of "Generate apart, evaluate together" refers to the solitary or small group activities of an individual who works to drive the creation of proposals and detailed solutions to present to the larger group. Such activities include:

  • Study problems.
  • Identify research strands and do research.
  • Collect information.
  • Identify solutions.
  • Write proposals, documentation, and user stories.
  • Estimate user stories.
  • Write/test/maintain code.
  • Experiment.
  • Report out, meaning the driver reports to the group the results of research, investigation, etc.


For future reference, you can remember the main ideas of this method with a MindMap:

Final Thoughts

I didn't invent the catchphrase "Generate apart, evaluate together," but I am not sure who did, where I heard it, or what its original intent was. I tried to find out, but Google didn't help much. Regardless, in this post I described how I use it, what it means to me, and what kinds of problems it resolves. I have used this model for a long time, and I only recently gave it a tagline that sticks in the mind and can be used to describe the method without much explanation.

If you enjoyed this article, keep in touch!

09 Apr 18:58

Planned obsolescence: the prosecution case against Epson misses the real problem

by Guest author

By Grégoire Dubost.

Sued for "planned obsolescence," Epson has already been convicted by a media tribunal. In an episode of Envoyé spécial broadcast on France 2 on 29 March 2018, journalists Anne-Charlotte Hinet and Swanny Thiébaut repeated, with astonishing credulity, the accusations hammered home by the association HOP (Halte à l'obsolescence programmée), which filed a complaint last year against the Japanese manufacturer.

Troubling findings

Some of the findings reported in this investigation are indeed troubling: we discover that a printer supposedly rendered unusable can perfectly well print several pages after a pirate driver is installed; as for the cartridges, they seem far from empty when the user is prompted to replace them. Is this not proof that a conspiracy is being hatched against consumers as unfortunate as they are powerless?

The anxious music accompanying the report suggests that danger hangs over anyone who dares to denounce it. We tremble at the mention of the man who "finally broke the silence"; what will his boldness have cost him? An electronics enthusiast able to assist the investigators lives on the margins of society, a recluse far from cities and their threats: "we had to cover a few kilometres of open countryside to find an expert," the journalists recount. An expert supposedly able to decipher the contents of an electronic chip with a screwdriver... Success guaranteed.

The printers in question sell for around fifty euros. "The manufacturer has no interest in making printers at that price repairable, otherwise it wouldn't sell any more of them," claims one of the witnesses interviewed. A strange conviction: if a manufacturer were able to extend the life of its products and lengthen their warranty accordingly, without raising their cost or compromising their features, wouldn't it have every interest in doing so, in the hope of winning market share from its competitors?

Selling very expensive ink

In any case, Epson's most lucrative business lies elsewhere: "better to sell a cheap printer and then sell very expensive ink," notes another interviewee, this time with good reason. It is true that, over time, makers of generic cartridges manage to circumvent the locks put in place by Epson and its peers to carve out short-lived monopolies on the consumables market.

That said, even if he prefers to buy cheaper ink, the owner of a working Epson printer will always be a potential buyer of the brand's cartridges; he certainly will not be once his printer has broken down... This common-sense observation seems to have escaped the investigators, who nonetheless claim that "manufacturers do everything to make you buy their own cartridges." Perhaps that could explain the obligation to replace a cartridge in order to use the scanner of a multifunction printer... But what guarantee would Epson have that its customers would remain loyal when replacing equipment whose obsolescence it had itself programmed? They will be all the less loyal if they were disappointed by their purchase, particularly if they feel their printer failed them prematurely.

Why these aberrations?

The soundness of a planned-obsolescence strategy is therefore open to question. How, then, to explain certain aberrations? Epson has little to say on the subject; its communication has even been calamitous! "A printer is a sophisticated product," it asserts in a press release. Not really, in fact. At least the electronics embedded in such a device are hardly the most elaborate. Watching a cartridge being dissected, the journalists feigned astonishment on discovering its chip: "Surprise! [...] No electrical circuit, nothing connecting it to the ink reservoir. It is simply glued on. How on earth can this chip indicate the ink level if it is not in contact with the ink?" What did they expect to find in a consumable whose recycling is notoriously haphazard? Or in a device sold for just a few dozen euros? Whatever Epson says, ink consumption and the state of the waste-ink pad are no doubt estimated approximately. Apparently the manufacturer leaves a wide margin, a very wide one indeed! That is of course regrettable, but what about the alternatives? Do Epson's competitors offer more effective technical solutions on products sold at comparable prices? Yet another question that was never asked...

As for the cartridges, part of whose ink is wasted, the malice attributed to the manufacturer remains to be demonstrated. One does not buy an ink cartridge the way one picks a carton of milk or fills a fuel tank. While the volume of ink a cartridge contains is indeed stated on the packaging, that information is not particularly highlighted. It also differs from one brand to another; it serves neither as a reference point nor as a basis for comparison. In practice, we buy not millilitres of ink but cartridges of so-called standard or high capacity, with the promise that they will let us print a certain number of pages. Under those conditions, what interest would a manufacturer have in restricting the proportion of ink actually used? It might as well reduce the volume in the cartridges! For the consumer it would come to the same thing: he would be condemned to buy more of them. For the manufacturer, on the other hand, it would obviously be more attractive, since it would have less ink to produce for the same number of cartridges sold at the same price. An Epson representative, filmed without his knowledge, tried to explain this during the report, though with extreme clumsiness. The investigators did not fail to relish it, taking wicked pleasure in staging the exposure of a shameless lie.

Questionable allegations

It must be said that they showed no such zeal in verifying the allegations of the activists who inspired them. When Epson judges it necessary to replace a printer's waste-ink pad, it disables the device on the grounds that ink might otherwise spill everywhere. Among users of a pirate driver that bypasses this lock, "we have had no case of people writing to tell us it had overflowed," retorts the representative of the HOP association. Why did the journalists not try the experiment of emptying a few extra cartridges under those conditions? Failing that, they might have scoured the web for a possible testimony. "A few years ago I repaired a printer that left big smears on every printout, whose owner had, a year earlier, reset the counter [...] so printing could resume," recounts one internet user, TeoB, in a comment posted on 19 January on the site; "the pad was soaked with ink that had overflowed and coated the entire bottom of the printer," he adds; "it took me a few hours to put everything right, plus a night of drying," he recalls. In this case, what passes for planned obsolescence may in fact amount to preventive maintenance... The journalists themselves reported it during their programme: Epson says it replaces this famous pad free of charge; why did they not approach the company to assess the service on offer?

They preferred to endorse the idea that a printer affected by a consumable deemed to be at end of life, rightly or wrongly, should be destined for the scrapheap. The confusion on this point is maintained during the report by a technician presented as a "specialist in ink and breakdowns." The activists of the HOP association thereby display a patent inconsistency: they could deplore the cryptic communication of printer manufacturers, whose user manuals and on-screen messages seem to make no mention of the free maintenance operations promised elsewhere; instead, they prefer to keep alive the myth of deliberate sabotage, paradoxically reinforcing users in the belief that their products are beyond repair... If the ball is sometimes in the manufacturers' court, manufacturers are quick to send it back to consumers; consumers still have to take hold of it, rather than shirk responsibilities conveniently obscured by a conspiracy theory.


The article "Obsolescence programmée : le procès à charge contre Epson élude le vrai problème" first appeared on Contrepoints.

09 Apr 18:54

What kind of managers will Generation Z bring?

They're smart, conscientious, ambitious, and forward-thinking. And of course, they're the first true digital natives. Growing up in a world where information changes by the second, Generation Z — the generation born roughly after 1995 — is ready to take the workplace by storm.

As the group that follows the millennials, Generation Z — or "Gen Z" as they are often called — is changing the way we do business. They see the world through a different lens. Their passions and inspirations will define what they do for a career, and consequently drive many of their decisions about how they see themselves in the workplace. These are the managers, big thinkers, and entrepreneurs of tomorrow. What will America's businesses look like with them at the helm?

They'll encourage risk-taking

Laura Handrick, an HR and workplace analyst, believes the workers coming out of Generation Z are not limited by a fear of not knowing. "They recognize that everything they need or want to know is available from the internet, their friends, or crowdsourcing — so, they don't fear the unknown," she says.

Gen Z-ers know they can figure anything out if they have the right people, experts, and thought leaders around them. That's why Handrick believes they are more likely to try new things — and expect their peers and coworkers to do so, too. This could manifest itself as a kind of entrepreneurial spirit we haven't seen in years.

But they won't be reckless

While Gen Z is likely to encourage new ideas, most of the risks they'll be willing to take will be calculated ones. None of this "move fast and break things" mindset that has so driven today's entrepreneurs. After all, many of them grew up during the financial crisis of the mid- to late-2000s, and they know what happens when you price yourself out of your home, max out your credit cards, or leave college with $30,000 in student loans. Many were raised in homes that were foreclosed, or had parents who declared bankruptcy. Because of this, most of them will keep a close eye on the financial side of a business.

"Think of them as the bright eyes and vision of millennials with the financial-savvy of boomers, having lived through the financial crash of 2008," says Morgan Chaney, head of marketing for Blueboard, an HR tech startup.

Indeed, according to one survey, 70 percent of Generation Z-ers say they're most motivated by money, compared to 63 percent of millennials. In other words, putting a Gen Z-er in charge of your business is not going to make it go "belly up." They'll be more conservative with cash and very wary of bad business investments. This generation is unique in its understanding of money and how it works.

Get ready for more inclusive workplaces

Chaney believes Gen Z managers will be empathetic, hands-on, and purpose-driven. Since they believe in working one-on-one to ensure each employee is nurtured and has a purpose-driven career path, Generation Z is going to manage businesses with a "people-first" approach.

This means the workplaces they run will be less plagued by diversity problems than the generations that came before them. "They'll value each person for their experience, expertise, and uniqueness — race or gender won't be issues for them," says Handrick.

Gen Z-er Timothy Woods, co-founder of a company that connects customers with experts in various fields, says his generation will be adaptive and empathetic managers, with a steely determination to succeed. "This generation is no longer only competing with the person in the next office, but instead with a world of individuals, each more informed and successful than the next, constantly broadcasting their achievements online and setting the bar over which one must now climb," he explains.

The 40-hour work week will disappear

Generation Z managers will also be very disciplined: Fifty-eight percent of those surveyed said they were willing to work nights and weekends, compared to 45 percent of millennials. Gone are the days of working a typical 9-to-5 job. Generation Z managers will tolerate and likely expect non-traditional hours. "They'll have software apps to track productivity and won't feel a need to have people physically sitting in the same office to know that work is getting done," Handrick explains. "Video conference, texting, and mobile apps are how they'll manage their teams."

The catch? They have to feel like their work matters. According to the survey, "Gen Z stands out as the generation that most strongly believes work should have a greater purpose." Seventy-four percent of Generation Z-ers surveyed said they want their work to be about more than just money. That's compared to 45 percent of millennials, and the numbers fall even further for Gen-X and boomers.

So, yes, they'll work hard, but only if they know it's for a good reason. And that's a positive thing. In this way, the leaders of tomorrow will merge money and purpose to create a whole new way of doing business.

05 Apr 21:37

the secret life of NaN

The floating point standard defines a special value called Not-a-Number (NaN) which is used to represent, well, values that aren’t numbers. Double precision NaNs come with a payload of 51 bits which can be used for whatever you want– one especially fun hack is using the payload to represent all other non-floating point values and their types at runtime in dynamically typed languages.

%%%%%% update (04/2019) %%%%%%

I gave a lightning talk about the secret life of NaN at !!con West 2019– it has fewer details, but more jokes; you can find a recording here.


the non-secret life of NaN

When I say “NaN” and also “floating point”, I specifically mean the representations defined in IEEE 754-2008, the ubiquitous floating point standard. This standard was born in 1985 (with much drama!) out of a need for a canonical representation which would allow code to be portable by quelling the anarchy induced by the menagerie of inconsistent floating point representations used by different processors.

Floating point values are a discrete, logarithmic-ish approximation to the real numbers; below is a visualization of the points defined by a toy floating-point-like representation with 3 bits of exponent and 3 bits of mantissa (the image is from the paper "How do you compute the midpoint of an interval?", which points out arithmetic artifacts that commonly show up in midpoint computations).

Since the NaN I’m talking about doesn’t exist outside of IEEE 754-2008, let’s briefly take a look at the spec.

An extremely brief overview of IEEE 754-2008

The standard defines these logarithmic-ish distributions of values with base 2 and base 10. For base 2, the standard defines representations for all power-of-two bit-widths between 16 and 256 bits; for base 10, it defines representations for all power-of-two bit-widths between 32 and 128 bits. (Well, almost. For the exact details, see page 13 of the spec.) These are the only standardized bit-widths, meaning that if a processor supports 32-bit floating point values, it is highly likely to support them in the standard-compliant representation.

Speaking of which, let’s take a look at what the standard compliant representation is. Let’s look at binary16, the base-2 16 bit wide format:

1 sign bit | 5 exponent bits | 10 mantissa bits
S            E E E E E         M M M M M M M M M M

I won’t explain how these are used to represent numeric values because I’ve got different fish to fry, but if you do want an explanation, I quite like these nice walkthroughs.

Briefly, though, here are some examples: the take-away is you can use these 16 bits to encode a variety of values.

0 01111 0000000000 = 1
0 00000 0000000000 = +0
1 00000 0000000000 = -0
1 01101 0101010101 = -0.333251953125

Cool, so we can represent some finite, discrete collection of real numbers. That’s what you want from your numeric representation most of the time.

More interestingly, though, the standard also defines some special values: ±infinity, and “quiet” & “signaling” NaN. ±infinity are self-explanatory overflow behaviors: in the visualization above, ±15 are the largest magnitude values which can be precisely represented, and computations with values whose magnitudes are larger than 15 may overflow to ±infinity. The spec provides guidance on when operations should return ±infinity based on different rounding modes.

What IEEE 754-2008 says about NaNs

First of all, let’s see how NaNs are represented, and then we’ll straighten out this “quiet” vs “signaling” business.

The standard reads (page 35, §6.2.1)

All binary NaN bit strings have all the bits of the biased exponent field E set to 1 (see 3.4). A quiet NaN bit string should be encoded with the first bit (d1) of the trailing significand field T being 1. A signaling NaN bit string should be encoded with the first bit of the trailing significand field being 0.

For example, in the binary16 format, NaNs are specified by the bit patterns:

s 11111 1xxxxxxxxxx = quiet     (qNaN)
s 11111 0xxxxxxxxxx = signaling (sNaN) **

Notice that this is a large collection of bit patterns! Even ignoring the sign bit, there are 2^(number of mantissa bits - 1) bit patterns which all encode a NaN! We'll refer to these leftover bits as the payload. **: a slight complication: in the sNaN case, at least one of the mantissa bits must be set; it cannot have an all-zero payload, because the bit pattern with a fully set exponent and a fully zeroed-out mantissa encodes infinity.

It seems strange to me that the bit which signifies whether or not the NaN is signaling is the top bit of the mantissa rather than the sign bit; perhaps something about how floating point pipelines are implemented makes it less natural to use the sign bit to decide whether or not to raise a signal.

Modern commodity hardware commonly uses 64 bit floats; the double-precision format has 52 bits for the mantissa, which means there are 51 bits available for the payload.

Okay, now let’s see the difference between “quiet” and “signaling” NaNs (page 34, §6.2):

Signaling NaNs afford representations for uninitialized variables and arithmetic-like enhancements (such as complex-affine infinities or extremely wide range) that are not in the scope of this standard. Quiet NaNs should, by means left to the implementer’s discretion, afford retrospective diagnostic information inherited from invalid or unavailable data and results. To facilitate propagation of diagnostic information contained in NaNs, as much of that information as possible should be preserved in NaN results of operations.

Under default exception handling, any operation signaling an invalid operation exception and for which a floating-point result is to be delivered shall deliver a quiet NaN.

So “signaling” NaNs may raise an exception; the standard is agnostic to whether floating point is implemented in hardware or software so it doesn’t really say what this exception is. In hardware this might translate to the floating point unit setting an exception flag, or for instance, the C standard defines and requires the SIGFPE signal to represent floating point computational exceptions.

So, that last quoted sentence says that an operation which receives a signaling NaN can raise the alarm, then quiet the NaN and propagate it along. Why might an operation receive a signaling NaN? Well, that’s what the first quoted sentence explains: you might want to represent uninitialized variables with a signaling NaN so that if anyone ever tries to perform an operation on that value (without having first initialized it) they will be signaled that that was likely not what they wanted to do.

Conversely, "quiet" NaNs are your garden-variety NaN: qNaNs are produced when the result of an operation is genuinely not a number, like attempting to take the square root of a negative number. The really valuable thing to notice here is the sentence:

To facilitate propagation of diagnostic information contained in NaNs, as much of that information as possible should be preserved in NaN results of operations.

This means the official suggestion in the floating point standard is to leave a qNaN exactly as you found it, in case someone is using it to propagate "diagnostic information" via the payload we saw above. Is this an invitation to jerry-rig extra information into NaNs? You bet it is!

What can we do with the payload?

This is really the question I’m interested in; or, rather, the slight refinement: what have people done with the payload?

The most satisfying answer that I found to this question is, people have used the NaN payload to pass around data & type information in dynamically typed languages, including implementations in Lua and JavaScript. Why dynamically typed languages? Because if your language is dynamically typed, then the type of a variable can change at runtime, which means you absolutely must also pass around some type information; the NaN payload is an opportunity to store both that type information and the actual value. We'll take a look at one of these implementations in detail in just a moment.

I tried to track down other uses but didn’t find much else; this textbook has some suggestions (page 86):

One possibility might be to use NaNs as symbols in a symbolic expression parser. Another would be to use NaNs as missing data values and the payload to indicate a source for the missing data or its class.

The author probably had something specific in mind, but I couldn’t track down any implementations which used NaN payloads for symbols or a source indication for missing data. If anyone knows of other uses of the NaN payload in the wild, I’d love to hear about them!

Okay, let’s look at how JavaScriptCore uses the payload to store type information:

Payload in Practice! A look at JavaScriptCore

We're going to look at an implementation of a technique called NaN-boxing. Under NaN-boxing, all values in the language and their type tags are represented in 64 bits! Valid double-precision floats are left in their IEEE 754 representations, but all of that leftover space in the payload of NaNs is used to store every other value in the language, along with a tag signifying the type of the payload. It's as if instead of saying "not a number" we were saying "not a double-precision float, but rather some other type".

We’re going to look at how JavaScriptCore (JSC) uses NaN-boxing, but JSC isn’t the only real-world industry-grade implementation that stores other types in NaNs. For example, Mozilla’s SpiderMonkey JavaScript implementation also uses NaN-boxing (which they call nun-boxing & pun-boxing), as does LuaJIT, which they call NaN-tagging. The reason I want to look at JSC’s code is it has a really great comment explaining their implementation.

JSC is the JavaScript implementation that powers WebKit, which runs Safari and Adobe’s Creative Suite. As far as I can tell, the code we’re going to look at is actually currently being used in Safari- as of March 2018, the file had last been modified 18 days ago.

NaN-Boxing explained in a comment

Here is the file we’re going to look at. The way NaN-boxing works is that when you have non-float datatypes (pointers, integers, booleans), you store them in the payload and use the top bits to encode the type of the payload. In the case of double-precision floats, we have 51 bits of payload, which means we can store anything that fits in those 51 bits. Notably, we can store 32-bit integers and 48-bit pointers (the current x86-64 pointer bit-width). This means that we can store every value in the language in 64 bits.

Sidenote: according to the ECMAScript standard, JavaScript doesn’t have a primitive integer datatype; it’s all double-precision floats. So why would a JS implementation want to represent integers? One good reason is that integer operations are much faster in hardware, and many of the numeric values used in programs really are ints. A notable example is an index variable in a for-loop which walks over an array. Also according to the ECMAScript spec, arrays can have at most 2^32 elements, so it is actually safe to store array index variables as 32-bit ints in NaN payloads.

The encoding they use is:

* The top 16-bits denote the type of the encoded JSValue:
*     Pointer {  0000:PPPP:PPPP:PPPP
*              / 0001:****:****:****
*     Double  {         ...
*              \ FFFE:****:****:****
*     Integer {  FFFF:0000:IIII:IIII
* The scheme we have implemented encodes double precision values by performing a
* 64-bit integer addition of the value 2^48 to the number. After this manipulation
* no encoded double-precision value will begin with the pattern 0x0000 or 0xFFFF.
* Values must be decoded by reversing this operation before subsequent floating point
* operations may be performed.

So this comment explains that different value ranges are used to represent different types of objects. But notice that these bit-ranges don’t match those defined in IEEE-754; for instance, in the standard for double precision values:

a valid qNaN:
1 sign bit | 11 exponent bits | 52 mantissa bits
1 | 1 1 1 1 1 1 1 1 1 1 1 | 1 + {51 bits of payload}

chunked into nibbles (hex digits) this is:
1 1 1 1 | 1 1 1 1 | 1 1 1 1 | 1 + {51 bits of payload}

which covers all the bit patterns in the range:
0x F F F 8 ...
0x F F F F ...

This means that according to the standard, the bit-ranges usually represented by valid doubles vs. qNaNs are:

         / 0000:****:****:****
Double  {        ...
         \ FFF7:****:****:****
         / FFF8:****:****:****
qNaN    {        ...
         \ FFFF:****:****:****

So what the comment in the code is showing us is that the ranges they’re representing are shifted from what’s defined in the standard. The reason they’re doing this is to favor pointers: because pointers occupy the range with the top two bytes zeroed, you can manipulate pointers without applying a mask. The effect is that pointers aren’t “boxed”, while all other values are. This choice to favor pointers isn’t obvious; the SpiderMonkey implementation doesn’t shift the range, thus favoring doubles.

Okay, so I think the easiest way to see what’s up with this range shifting business is by looking at the mask lower down in that file:

// This value is 2^48, used to encode doubles such that the encoded value will begin
// with a 16-bit pattern within the range 0x0001..0xFFFE.
#define DoubleEncodeOffset 0x1000000000000ll

This offset is used in the asDouble() function:

inline double JSValue::asDouble() const
{
    return reinterpretInt64ToDouble(u.asInt64 - DoubleEncodeOffset);
}

This shifts the encoded double into the normal range of bit patterns defined by the standard. Conversely, the asCell() function (I believe in JSC “cells” and “pointers” are roughly interchangeable terms) can just grab the pointer directly without shifting:

ALWAYS_INLINE JSCell* JSValue::asCell() const
{
    return u.ptr;
}

Cool. That’s actually basically it. Below I’ll mention a few more fun tidbits from the JSC implementation, but this is really the heart of the NaN-boxing implementation.

What about all the other values?

The part of the comment that said that if the top two bytes are 0, then the payload is a pointer was lying. Or, okay, over-simplified. JSC reserves specific, invalid, pointer values to denote immediates required by the ECMAScript standard: boolean, undefined & null:

*     False:     0x06
*     True:      0x07
*     Undefined: 0x0a   
*     Null:      0x02

These all have the second bit set to make it easy to test whether the value is any of these immediates.

They also represent 2 immediates not required by the standard: ValueEmpty at 0x00, which is used to represent holes in arrays, & ValueDeleted at 0x04, which is used to mark deleted values.

And finally, they also represent pointers into Wasm at 0x03.

So, putting it all together, a complete picture of the bit pattern encodings in JSC is:

*     ValEmpty  {  0000:0000:0000:0000
*     Null      {  0000:0000:0000:0002
*     Wasm      {  0000:0000:0000:0003
*     ValDeltd  {  0000:0000:0000:0004   
*     False     {  0000:0000:0000:0006
*     True      {  0000:0000:0000:0007
*     Undefined {  0000:0000:0000:000a    
*     Pointer   {  0000:PPPP:PPPP:PPPP
*                / 0001:****:****:****
*     Double    {         ...
*                \ FFFE:****:****:****
*     Integer   {  FFFF:0000:IIII:IIII


To recap:

  1. The floating point spec leaves a lot of room for NaN payloads. It does this intentionally.
  2. What are these payloads used for in real life? Mostly, I don’t know what they’re used for. If you know of other real world uses, I’d love to hear from you.
  3. One use is NaN-boxing, which is where you stick all the other non-floating point values in a language + their type information into the payload of NaNs. It’s a beautiful hack.

Appendix: to NaNbox or not to NaNbox

Looking at this implementation raises the question: is NaN-boxing a good idea or a bizarro hack? As someone who isn’t implementing or maintaining a dynamically typed language, I’m not well placed to answer that question. There are a lot of different approaches, which surely all have nuanced tradeoffs that show up depending on the use-cases of your language. With that caveat, here’s a rough sketch of what some pros & cons are. Pros: saves memory, all values fit in registers, bit masks are fast to apply. Cons: you have to box & unbox almost all values, the implementation becomes harder, and validation bugs can be serious security vulnerabilities.

For a better discussion of NaN-boxing tradeoffs from someone who does implement & maintain a dynamically typed language check out this article.

Apart from performance, there is this writeup and this other writeup of vulnerabilities discovered in JSC. Whether these vulnerabilities would have been preventable if JSC had used a different approach for storing type information is debatable, but there is at least one vulnerability that seems like it would have been prevented:

This way we control all 8 bytes of the structure, but there are other limitations (Some floating-point normalization crap does not allow for truly arbitrary values to be written. Otherwise, you would be able to craft a CellTag and set pointer to an arbitrary value, that would be horrible. Interestingly, before it did allow that, which is what the very first Vita WebKit exploit used! CVE-2010-1807).

If you want to know way more about JSC’s memory model there is also this very in depth article.

08 Mar 07:57

New world record for walking barefoot on Lego


We can all agree that accidentally stepping on a Lego brick is one of the worst sensations in the world (stepping on one on purpose, too, for that matter). Well, like just about every “discipline” in this world, there is a world record for walking barefoot on Lego, registered with Guinness World Records. And a guy just broke it.

When a world record rhymes with pain

Tyler, a member of the YouTube channel Dude Perfect, covered 44.7 meters of Lego bricks barefoot for the show “Absurd Record”. A record which was, of course, validated by Guinness World Records, which was, of course, also on site. In the video below, you will see a brave man in genuine pain. You have all our respect, Tyler.

As a side note, the previous best was 24.9 meters. For those who are truly terrible at mental arithmetic, Tyler beat his predecessor by almost 20 meters…

The article New world record for walking barefoot on Lego first appeared on GOLEM13.FR.

04 Mar 17:11

Why every user agent string starts with "Mozilla"

by /u/psychologicalX
04 Mar 10:14

US Border Patrol Hasn’t Validated E-Passport Data For Years

by Lily Hay Newman
For over a decade, US Customs and Border Protection has been unable to verify the cryptographic signatures on e-Passports, because they never installed the right software.
03 Mar 19:57

Contactless bank cards and data privacy

by Numendil

Contactless payment is a feature available on more than 60% of the bank cards in circulation. Since banking data is sensitive, it naturally has to be protected.

Is that really the case?

The growth of contactless payment

This feature appeared in France around 2012 and has kept growing ever since. According to the GIE Cartes Bancaires, 44.9 million contactless bank cards were in circulation in September 2017, i.e. 68% of the French card base.

GIE data on contactless card usage
(source: GIE Cartes Bancaires)

In its 2016 report (PDF, page 11), the same GIE states that 605 million payments were made via contactless. If that figure seems huge, its growth is even more so: +158% more payments than in 2015, and the trend shows no sign of slowing.

Contactless payment is meant for small, “everyday life” transactions, with amounts capped at €30 since October 2017.

How contactless payment works

The principle is fairly simple: the holder of a contactless card wants to pay for a transaction (under €30, then), holds the card a few centimeters from the contactless payment terminal, and “poof”, it’s settled.

Contactless payment is based on NFC (Near Field Communication) technology, via a chip and a circuit acting as an antenna, both embedded in the bank card.

NFC is characterized by its communication range, which does not exceed 10 cm with conventional hardware. Contactless cards operate in the high-frequency band (13.56 MHz) and can use encryption and authentication protocols. The Navigo pass, recent driving licenses, and some recent identity documents use NFC, for example.

If the technical details interest you, I invite you to read the ISO 14443-A standard and ISO 7816, part 4, in detail.

Contactless payment and personal data

Let’s summarize the problem simply: there is no authentication phase and no full encryption of the data. In plain terms, this means that fairly sensitive information travels around, in the clear, on a piece of plastic.

Many demonstrations exist here and there; you can also find mobile apps that let you retrieve the unencrypted information (your phone must support NFC to perform the operation).

example of a bank-card-reading app

To pull this off with conventional hardware, you must be at most a few centimeters from the contactless card, which greatly limits the attack potential and effectively rules out “industrializing” such attacks.

However, with more precise, more powerful, and more expensive equipment, it is possible to read the card’s data from up to 1.5 meters away, and even farther with specialized, still more expensive gear (a range of about 15 meters has been mentioned for that kind of equipment). An attacker with such equipment can harvest a rather impressive list of cards, since they are more and more widespread… problematic, no?

In 2012, the situation was more alarming than today: it was possible to retrieve the cardholder’s name, card number, expiration date, transaction history, and the data from the card’s magnetic stripe.

In 2017… it is still possible to retrieve the card number, its expiration date and, sometimes, the transaction history, but we will come back to that.

What does the CNIL say about this?

I asked the CNIL whether a bank card number should be considered personal data; no answer so far. I will edit this article when the answer arrives.

If the card number is personal data, then the fact that it is available and stored in the clear seems problematic to me; it does not really appear to comply with the French Data Protection Act (loi informatique et libertés).

In 2013, that same CNIL issued recommendations to banking institutions, recalling for instance Articles 32 and 38 of the Data Protection Act. Cardholders must, among other things, be informed that contactless is present and must be able to refuse the technology.

Users like contactless payments because they are simple: just tap your card on the reader. They are preferred over cash payments, and some even go so far as to declare that “cash will end up disappearing within a few years”. Its massive use means your bank knows you better: it can now see the payments that used to escape it when they were made in cash.

As early as 2012, the CNIL also raised the alarm about the data transmitted in the clear by the cards then in circulation. As a result, it is no longer possible to read the cardholder’s name, nor, normally, to retrieve the transaction history… though that last point is debatable, given that no later than last week I was able to do it with a card issued in 2014.

As explained earlier, it is still possible today to retrieve the card number as well as its expiration date.

In the scenario of a targeted attack against an individual, obtaining their name is not complicated. The CVV (the three digits printed on the back of the card) can be brute-forced; there are only 1,000 possible combinations, from 000 to 999.

While the CNIL has noted improvements, it is not reassured for all that. In 2013, it invited the banking sector to upgrade its security measures to guarantee that banking data cannot be collected or exploited by third parties.

It hopes the sector will follow the various recommendations issued [PDF, page 3], notably by the Observatoire de la Sécurité des Cartes de Paiement, regarding the protection and encryption of exchanges. The first recommendations date back to 2007 [PDF], but unfortunately, ten years later, very little has been done to effectively protect the banking data stored on contactless cards.

While techniques exist to restrict or even prevent the retrieval of banking data over contactless, the result is still not satisfactory: the card number is still stored in the clear and easily readable, and the solutions guarantee neither an adequate level of protection nor a permanent one.

One solution is to “lock up” your card in a sleeve that blocks the frequencies used by NFC. As long as the card is in its sleeve, no risk… but to pay, you have to take the card out of said sleeve, hence the problem.

The other, more “direct” solution is to physically punch a hole in the card at the right spot to put its circuit out of service. Be careful though: your bank card is generally not your property (you rent it from your bank), and damaging your bank’s property is normally prohibited.

Personal data or not?

As I mentioned earlier: does a bank card number, on its own, constitute personal data (donnée à caractère personnel, or DCP)?

It may seem like a minor detail, but I think it is actually quite important. If it is indeed a DCP, then the card number must, like any other DCP, receive an adequate level of protection, a requirement that is currently not met.

If you have the answer, feel free to contact me or point me to some references.

03 Mar 10:19

How to clean up space's rubbish dump

Tomorrow’s spacecraft will face a huge challenge
03 Mar 09:55

Chrome's WebUSB Feature Leaves Some Yubikeys Vulnerable to Attack

by Andy Greenberg
While still the best protection against phishing attacks, some Yubikey models are vulnerable after a recent update to Google Chrome.
02 Mar 18:14

Modern CSS Explained For Dinosaurs

by Peter Jang
Images from Dinosaur Comics by Ryan North

CSS is strangely considered both one of the easiest and one of the hardest languages to learn as a web developer. It’s certainly easy enough to get started with it — you define style properties and values to apply to specific elements, and…that’s pretty much all you need to get going! However, it gets tangled and complicated to organize CSS in a meaningful way for larger projects. Changing any line of CSS to style an element on one page often leads to unintended changes for elements on other pages.

In order to deal with the inherent complexity of CSS, all sorts of different best practices have been established. The problem is that there isn’t any strong consensus on which best practices are in fact the best, and many of them seem to completely contradict each other. If you’re trying to learn CSS for the first time, this can be disorienting to say the least.

The goal of this article is to provide a historical context of how CSS approaches and tooling have evolved to what they are today in 2018. By understanding this history, it will be easier to understand each approach and how to use them to your benefit. Let’s get started!

Update: I made a new video course version of this article, which goes over the material with greater depth; check it out here.

Using CSS for basic styling

Let’s start with a basic website using just a simple index.html file that links to a separate index.css file:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Modern CSS</title>
  <link rel="stylesheet" href="index.css">
</head>
<body>
  <header>This is the header.</header>
  <main>
    <h1>This is the main content.</h1>
  </main>
  <nav>
    <h4>This is the navigation section.</h4>
  </nav>
  <aside>
    <h4>This is an aside section.</h4>
  </aside>
  <footer>This is the footer.</footer>
</body>
</html>

Right now we aren’t using any classes or ids in the HTML, just semantic tags. Without any CSS, the website looks like this (using placeholder text):

Click here to see a live example

Functional, but not very pretty. We can add CSS to improve the basic typography in index.css:

/* BASIC TYPOGRAPHY                       */
/* from */
html {
  font-size: 62.5%;
  font-family: serif;
}
body {
  font-size: 1.8rem;
  line-height: 1.618;
  max-width: 38em;
  margin: auto;
  color: #4a4a4a;
  background-color: #f9f9f9;
  padding: 13px;
}
@media (max-width: 684px) {
  body {
    font-size: 1.53rem;
  }
}
@media (max-width: 382px) {
  body {
    font-size: 1.35rem;
  }
}
h1, h2, h3, h4, h5, h6 {
  line-height: 1.1;
  font-family: Verdana, Geneva, sans-serif;
  font-weight: 700;
  overflow-wrap: break-word;
  word-wrap: break-word;
  -ms-word-break: break-all;
  word-break: break-word;
  -ms-hyphens: auto;
  -moz-hyphens: auto;
  -webkit-hyphens: auto;
  hyphens: auto;
}
h1 {
  font-size: 2.35em;
}
h2 {
  font-size: 2em;
}
h3 {
  font-size: 1.75em;
}
h4 {
  font-size: 1.5em;
}
h5 {
  font-size: 1.25em;
}
h6 {
  font-size: 1em;
}

Here most of the CSS is styling the typography (fonts with sizes, line height, etc.), with some styling for the colors and a centered layout. You’d have to study design to know good values to choose for each of these properties (these styles are from sakura.css), but the CSS itself that’s being applied here isn’t too complicated to read. The result looks like this:

Click here to see a live example

What a difference! This is the promise of CSS — a simple way to add styles to a document, without requiring programming or complex logic. Unfortunately, things start to get hairier when we use CSS for more than just typography and colors (which we’ll tackle next).

Using CSS for layout

In the 1990s, before CSS gained wide adoption, there weren’t a lot of options to layout content on the page. HTML was originally designed as a language to create plain documents, not dynamic websites with sidebars, columns, etc. In those early days, layout was often done using HTML tables — the entire webpage would be within a table, which could be used to organize the content in rows and columns. This approach worked, but the downside was the tight coupling of content and presentation — if you wanted to change the layout of a site, it would require rewriting significant amounts of HTML.

Once CSS entered the scene, there was a strong push to keep content (written in the HTML) separate from presentation (written in the CSS). So people found ways to move all layout code out of HTML (no more tables) into CSS. It’s important to note that like HTML, CSS wasn’t really designed to layout content on a page either, so early attempts at this separation of concerns were difficult to achieve gracefully.

Let’s take a look at how this works in practice with our above example. Before we define any CSS layout, we’ll first reset any margins and paddings (which affect layout calculations), as well as give each section distinct colors (not to make it pretty, but to make each section visually stand out when testing different layouts).

body {
  margin: 0;
  padding: 0;
  max-width: inherit;
  background: #fff;
  color: #4a4a4a;
}
header, footer {
  font-size: large;
  text-align: center;
  padding: 0.3em 0;
  background-color: #4a4a4a;
  color: #f9f9f9;
}
nav {
  background: #eee;
}
main {
  background: #f9f9f9;
}
aside {
  background: #eee;
}

Now the website temporarily looks like:

Click here to see a live example

Now we’re ready to use CSS to layout the content on the page. We’ll look at three different approaches in chronological order, starting with the classic float-based layouts.

Float-based layout

The CSS float property was originally introduced to float an image inside a column of text on the left or right (something you often see in newspapers). Web developers in the early 2000s took advantage of the fact that you could float not just images, but any element, meaning you could create the illusion of rows and columns by floating entire divs of content. But again, floats weren’t designed for this purpose, so creating this illusion was difficult to pull off in a consistent fashion.

In 2006, A List Apart published the popular article In Search of the Holy Grail, which outlined a detailed and thorough approach to creating what was known as the Holy Grail layout — a header, three columns and a footer. It’s pretty crazy to think that what sounds like a fairly straightforward layout would be referred to as the Holy Grail, but that was indeed how hard it was to create consistent layout at the time using pure CSS.

Below is a float-based layout for our example based on the technique described in that article:

body {
  padding-left: 200px;
  padding-right: 190px;
  min-width: 240px;
}
header, footer {
  margin-left: -200px;
  margin-right: -190px;
}
main, nav, aside {
  position: relative;
  float: left;
}
main {
  padding: 0 20px;
  width: 100%;
}
nav {
  width: 180px;
  padding: 0 10px;
  right: 240px;
  margin-left: -100%;
}
aside {
  width: 130px;
  padding: 0 10px;
  margin-right: -100%;
}
footer {
  clear: both;
}
* html nav {
  left: 150px;
}

Looking at the CSS, you can see there are quite a few hacks necessary to get it to work (negative margins, the clear: both property, hard-coded width calculations, etc.) — the article does a good job explaining the reasoning for each in detail. Below is what the result looks like:

Click here to see a live example

This is nice, but you can see from the colors that the three columns are not equal in height, and the page doesn’t fill the height of the screen. These issues are inherent in a float-based approach. All a float can do is place content to the left or right of a section — the CSS has no way to infer the heights of the content in the other sections. This problem had no straightforward solution until many years later, with a flexbox-based layout.

Flexbox-based layout

The flexbox CSS property was first proposed in 2009, but didn’t get widespread browser adoption until around 2015. Flexbox was designed to define how space is distributed across a single column or row, which makes it a better candidate for defining layout compared to using floats. This meant that after about a decade of using float-based layouts, web developers were finally able to use CSS for layout without the need for the hacks needed with floats.

Below is a flexbox-based layout for our example based on the technique described on the site Solved by Flexbox (a popular resource showcasing different flexbox examples). Note that in order to make flexbox work, we need to add an extra wrapper div around the three columns in the HTML:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Modern CSS</title>
  <link rel="stylesheet" href="index.css">
</head>
<body>
  <header>This is the header.</header>
  <div class="container">
    <main>
      <h1>This is the main content.</h1>
    </main>
    <nav>
      <h4>This is the navigation section.</h4>
    </nav>
    <aside>
      <h4>This is an aside section.</h4>
    </aside>
  </div>
  <footer>This is the footer.</footer>
</body>
</html>

And here’s the flexbox code in the CSS:

body {
  min-height: 100vh;
  display: flex;
  flex-direction: column;
}
.container {
  display: flex;
  flex: 1;
}
main {
  flex: 1;
  padding: 0 20px;
}
nav {
  flex: 0 0 180px;
  padding: 0 10px;
  order: -1;
}
aside {
  flex: 0 0 130px;
  padding: 0 10px;
}

That is way, way more compact compared to the float-based layout approach! The flexbox properties and values are a bit confusing at first glance, but it eliminates the need for a lot of the hacks like negative margins that were necessary with float-based layouts — a huge win. Here is what the result looks like:

Click here for a live example

Much better! The columns are all equal height and take up the full height of the page. In some sense this seems perfect, but there are a couple of minor downsides to this approach. One is browser support — currently every modern browser supports flexbox, but some older browsers never will. Fortunately browser vendors are making a bigger push to end support for these older browsers, making a more consistent development experience for web designers. Another downside is the fact that we needed to add the <div class="container"> to the markup — it would be nice to avoid it. In an ideal world, any CSS layout wouldn’t require changing the HTML markup at all.

The biggest downside though is the code in the CSS itself — flexbox eliminates a lot of the float hacks, but the code isn’t as expressive as it could be for defining layout. It’s hard to read the flexbox CSS and get a visual understanding how all of the elements will be laid out on the page. This leads to a lot of guessing and checking when writing flexbox-based layouts.

It’s important to note again that flexbox was designed to space elements within a single column or row — it was not designed for an entire page layout! Even though it does a serviceable job (much better than float-based layouts), a different specification was specifically developed to handle layouts with multiple rows and columns. This specification is known as CSS grid.

Grid-based layout

CSS grid was first proposed in 2011 (not too long after the flexbox proposal), but took a long time to gain widespread adoption with browsers. As of early 2018, CSS grid is supported by most modern browsers (a huge improvement over even a year or two ago).

Below is a grid-based layout for our example based on the first method in this CSS tricks article. Note that for this example, we can get rid of the <div class="container"> that we had to add for the flexbox-based layout — we can simply use the original HTML without modification. Here’s what the CSS looks like:

body {
  display: grid;
  min-height: 100vh;
  grid-template-columns: 200px 1fr 150px;
  grid-template-rows: min-content 1fr min-content;
}
header {
  grid-row: 1;
  grid-column: 1 / 4;
}
nav {
  grid-row: 2;
  grid-column: 1 / 2;
  padding: 0 10px;
}
main {
  grid-row: 2;
  grid-column: 2 / 3;
  padding: 0 20px;
}
aside {
  grid-row: 2;
  grid-column: 3 / 4;
  padding: 0 10px;
}
footer {
  grid-row: 3;
  grid-column: 1 / 4;
}

The result is visually identical to the flexbox based layout. However, the CSS here is much improved in the sense that it clearly expresses the desired layout. The size and shape of the columns and rows are defined in the body selector, and each item in the grid is defined directly by its position.

One thing that can be confusing is the grid-column property, which defines the start point / end point of the column. It can be confusing because in this example, there are 3 columns, but the numbers range from 1 to 4. It becomes more clear when you look at the picture below:

Click here to see a live example

The first column starts at 1 and ends at 2, the second column starts at 2 and ends at 3, and the third column starts at 3 and ends at 4. The header has a grid-column of 1 / 4 to span the entire page, the nav has a grid-column of 1 / 2 to span the first column, etc.

Once you get used to the grid syntax, it clearly becomes the ideal way to express layout in CSS. The only real downside to a grid-based layout is browser support, which again has improved tremendously over the past year. It’s hard to overstate the importance of CSS grid as the first real tool in CSS that was actually designed for layout. In some sense, web designers have always had to be very conservative with making creative layouts, since the tools up until now have been fragile, using various hacks and workarounds. Now that CSS grid exists, there is the potential for a new wave of creative layout designs that never would have been possible before — exciting times!

Using a CSS preprocessor for new syntax

So far we’ve covered using CSS for basic styling as well as layout. Now we’ll get into tooling that was created to help improve the experience of working with CSS as a language itself, starting with CSS preprocessors.

A CSS preprocessor allows you to write styles using a different language which gets converted into CSS that the browser can understand. This was critical back in the day when browsers were very slow to implement new features. The first major CSS preprocessor was Sass, released in 2006. It featured a new concise syntax (indentation instead of brackets, no semicolons, etc.) and added advanced features missing from CSS, such as variables, helper functions, and calculations. Here’s what the color section of our earlier example would look like using Sass with variables:

$dark-color: #4a4a4a
$light-color: #f9f9f9
$side-color: #eee

body
  color: $dark-color

header, footer
  background-color: $dark-color
  color: $light-color

main
  background: $light-color

nav, aside
  background: $side-color

Note how reusable variables are defined with the $ symbol, and that brackets and semicolons are eliminated, making for a cleaner looking syntax. The cleaner syntax in Sass is nice, but features like variables were revolutionary at the time, as they opened up new possibilities for writing clean and maintainable CSS.

To use Sass, you need to install Ruby, the programming language used to compile Sass code to regular CSS. Then you would need to install the Sass gem, then run a command in the command line to convert your .sass files into .css files. Here’s an example of what a command would look like:

sass --watch index.sass index.css

This command will convert Sass code written in a file named index.sass to regular CSS in a file named index.css (the --watch argument tells it to run any time the input changes on save, which is convenient).

This process is known as a build step, and it was a pretty significant barrier to entry back in 2006. If you’re used to programming languages like Ruby, the process is pretty straightforward. But many frontend developers at the time only worked with HTML and CSS, which did not require any such tools. So it was a big ask to have someone learn an entire ecosystem to be able to get the features offered by a CSS preprocessor.

In 2009, the Less CSS preprocessor was released. It was also written in Ruby, and offered similar features to Sass. The key difference was the syntax, which was designed to be as close to CSS as possible. This means that any CSS code is valid Less code. Here’s the same example written using Less syntax:

@dark-color: #4a4a4a;
@light-color: #f9f9f9;
@side-color: #eee;

body {
  color: @dark-color;
}

header, footer {
  background-color: @dark-color;
  color: @light-color;
}

main {
  background: @light-color;
}

nav, aside {
  background: @side-color;
}
It’s nearly the same (@ prefix instead of $ for variables), but not as pretty as the Sass example, with the same curly brackets and semi-colons as CSS. Yet the fact that it’s closer to CSS made it easier for developers to adopt it. In 2012, Less was rewritten to use JavaScript (specifically Node.js) instead of Ruby for compiling. This made Less faster than its Ruby counterparts, and made it more appealing to developers who were already using Node.js in their workflows.

To convert this code to regular CSS, you would first need to install Node.js, then install Less, then run a command like:

lessc index.less index.css

This command will convert Less code written in a file named index.less to regular CSS in a file named index.css. Note that the lessc command does not come with a way to watch files for changes (unlike the sass command), meaning you would need to install a different tool to automatically watch and compile .less files, adding a bit more complexity to the process. Again, this is not difficult for programmers who are used to using command line tools, but it is a significant barrier to entry for others who simply want to use a CSS preprocessor.

As Less gained mindshare, Sass developers adapted by adding a new syntax called SCSS in 2010 (which was a superset of CSS similar to Less). They also released LibSass, a C/C++ port of the Ruby Sass engine, which made it faster and able to be used in various languages.

Another alternative CSS preprocessor is Stylus, which came out in 2010; it is written in Node.js and focuses on cleaner syntax compared to Sass or Less. Conversations about CSS preprocessors usually focus on these three as the most popular (Sass, Less, and Stylus). In the end, they are all pretty similar in terms of the features they offer, so you can’t really go wrong picking any of them.

However, some people make the argument that CSS preprocessors are becoming less necessary, as browsers are finally beginning to implement some of their features (such as variables and calculations). Furthermore, there’s a different approach known as CSS postprocessing that has the potential to make CSS preprocessors obsolete (obviously not without controversy), which we’ll get into next.

Using a CSS postprocessor for transformative features

A CSS postprocessor uses JavaScript to analyze and transform your CSS into valid CSS. In this sense it’s pretty similar to a CSS preprocessor — you can think of it as a different approach to solving the same problem. The key difference is that while a CSS preprocessor uses special syntax to identify what needs to be transformed, a CSS postprocessor can parse regular CSS and transform it without any special syntax required. This is best illustrated with an example. Let’s look at a part of the CSS we originally defined above to style the header tags:

h1, h2, h3, h4, h5, h6 {
  -ms-hyphens: auto;
  -moz-hyphens: auto;
  -webkit-hyphens: auto;
  hyphens: auto;
}

The prefixed properties (-ms-hyphens, -moz-hyphens, and -webkit-hyphens) are called vendor prefixes. Vendor prefixes are used by browsers when they are experimentally adding or testing new CSS features, giving developers a way to use these new CSS properties while the implementation is being finalized. Here the -ms prefix is for Microsoft Internet Explorer, the -moz prefix is for Mozilla Firefox, and the -webkit prefix is for browsers using the WebKit rendering engine (like Google Chrome, Safari, and newer versions of Opera).

It’s pretty annoying to remember to put in all these different vendor prefixes to use these CSS properties. It would be nice to have a tool that can automatically put in vendor prefixes as needed. We can sort of pull this off with CSS preprocessors. For example, you could do something like this with SCSS:

@mixin hyphens($value) {
  -ms-hyphens: $value;
  -moz-hyphens: $value;
  -webkit-hyphens: $value;
  hyphens: $value;
}

h1, h2, h3, h4, h5, h6 {
  @include hyphens(auto);
}

Here we’re using Sass’ mixin feature, which allows you to define a chunk of CSS once and reuse it anywhere else. When this file is compiled into regular CSS, any @include statements will be replaced with the CSS from the matching @mixin. Overall this isn’t a bad solution, but you are responsible for defining each mixin the first time for any CSS property requiring vendor prefixes. These mixin definitions will require maintenance, as you may want to remove specific vendor prefixes that you no longer need as browsers update their CSS compatibility.

Instead of using mixins, it would be nice to simply write normal CSS and have a tool automatically identify properties that require prefixes and add them accordingly. A CSS postprocessor is capable of doing exactly that. For example, if you use PostCSS with the autoprefixer plugin, you can write completely normal CSS without any vendor prefixes and let the postprocessor do the rest of the work:

h1, h2, h3, h4, h5, h6 {
  hyphens: auto;
}

When you run the CSS postprocessor on this code, the result is the hyphens: auto; line gets replaced with all the appropriate vendor prefixes (as defined in the autoprefixer plugin, which you don’t need to directly manage). Meaning you can just write regular CSS without having to worry about any compatibility or special syntax, which is nice!
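Concretely, running the postprocessor over the rule above expands it back into the fully prefixed form shown earlier (exactly which prefixes are added depends on the browser list you configure):

```css
h1, h2, h3, h4, h5, h6 {
  -ms-hyphens: auto;
  -moz-hyphens: auto;
  -webkit-hyphens: auto;
  hyphens: auto;
}
```

You write the unprefixed property once, and the tool keeps the output up to date as browser support changes.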

There are plugins other than autoprefixer for PostCSS that allow you to do really cool things. The cssnext plugin allows you to use experimental CSS features. The CSS modules plugin automatically changes classes to avoid name conflicts. The stylelint plugin identifies errors and inconsistent conventions in your CSS. These tools have really started to take off in the last year or two, showcasing developer workflows that have never been possible before!

There is a price to pay for this progress, however. Installing and using a CSS postprocessor like PostCSS is more involved compared to using a CSS preprocessor. Not only do you have to install and run tools using the command line, but you need to install and configure individual plugins and define a more complex set of rules (like which browsers you are targeting, etc.) Instead of running PostCSS straight from the command line, many developers integrate it into configurable build systems like Grunt, Gulp, or webpack, which help manage all the different build tools you might use in your frontend workflow.

Note: It can be quite overwhelming to learn all the necessary parts to making a modern frontend build system work if you’ve never used one before. If you want to get started from scratch, check out my article Modern JavaScript Explained For Dinosaurs, which goes over all the JavaScript tooling necessary to take advantage of these modern features for a frontend developer.

It’s worth noting that there is some debate around CSS postprocessors. Some argue that the terminology is confusing (one argument is that they should all be called CSS preprocessors, another argument is that they should just be simply called CSS processors, etc.). Some believe CSS postprocessors eliminate the need for CSS preprocessors altogether, some believe they should be used together. In any case, it’s clear that learning how to use a CSS postprocessor is worth it if you’re interested in pushing the edge of what’s possible with CSS.

Using CSS methodologies for maintainability

Tools like CSS preprocessors and CSS postprocessors go a long way towards improving the CSS development experience. But these tools alone aren’t enough to solve the problem of maintaining large CSS codebases. To address this, people began to document different guidelines on how to write CSS, generally referred to as CSS methodologies.

Before we dive into any particular CSS methodology, it’s important to understand what makes CSS hard to maintain over time. The key issue is the global nature of CSS — every style you define is globally applied to every part of the page. It becomes your job to either come up with a detailed naming convention to maintain unique class names or wrangle with specificity rules to determine which style gets applied to any given element. CSS methodologies provide an organized way to write CSS in order to avoid these pain points with large code bases. Let’s take a look at some of the popular methodologies in rough chronological order.

OOCSS

OOCSS (Object Oriented CSS) was first presented in 2009 as a methodology organized around two main principles. The first principle is separate structure and skin. This means the CSS to define the structure (like layout) shouldn’t be mixed together with the CSS to define the skin (like colors, fonts, etc.). This makes it easier to “re-skin” an application. The second principle is separate container and content. This means think of elements as re-usable objects, with the key idea being that an object should look the same regardless of where it is on the page.

OOCSS provides well thought out guidelines, but isn’t very prescriptive on the specifics of the approach. Later approaches like SMACSS took the core concepts and added more detail to make it easier to get started.

SMACSS

SMACSS (Scalable and Modular Architecture for CSS) was introduced in 2011 as a methodology based around writing your CSS in 5 distinct categories — base rules, layout rules, modules, state rules, and theme rules. The SMACSS methodology also recommends some naming conventions. For layout rules, you would prefix class names with l- or layout-. For state rules, you would prefix class names that describe the state, like is-hidden or is-collapsed.

SMACSS has a lot more specifics in its approach compared to OOCSS, but it still requires some careful thought in deciding what CSS rules should go into which category. Later approaches like BEM took away some of this decision making to make it even easier to adopt.

BEM

BEM (Block, Element, Modifier) was introduced in 2010 as a methodology organized around the idea of dividing the user interface into independent blocks. A block is a re-usable component (an example would be a search form, defined as <form class="search-form"></form>). An element is a smaller part of a block that can’t be re-used on its own (an example would be a button within the search form, defined as <button class="search-form__button">Search</button>). A modifier is an entity that defines the appearance, state, or behavior of a block or element (an example would be a disabled search form button, defined as <button class="search-form__button search-form__button--disabled">Search</button>).
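A stylesheet matching these class names could look like the following (the declarations themselves are hypothetical, just to show the naming convention):

```css
/* Block: a re-usable search form component */
.search-form {
  display: flex;
}

/* Element: a button that only makes sense inside the search form */
.search-form__button {
  background: #5e2ca5;
  color: #fff;
}

/* Modifier: the disabled state of the button */
.search-form__button--disabled {
  opacity: 0.5;
  pointer-events: none;
}
```

Because every selector is a single flat class, there are no specificity battles, and the naming alone tells you how the pieces relate.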

The BEM methodology is simple to understand, with a specific naming convention that allows newcomers to apply it without having to make complex decisions. The downside for some is that the class names can be quite verbose, and don’t follow traditional rules for writing semantic class names. Later approaches like Atomic CSS would take this untraditional approach to a whole other level!

Atomic CSS

Atomic CSS (also known as Functional CSS) was introduced in 2014 as a methodology organized around the idea of creating small, single-purpose classes with names based on visual function. This approach is in complete opposition with OOCSS, SMACSS, and BEM — instead of treating elements on the page as re-usable objects, Atomic CSS ignores these objects altogether and uses re-usable single purpose utility classes to style each element. So instead of something like <button class="search-form__button">Search</button>, you would have something like <button class="f6 br3 ph3 pv2 white bg-purple hover-bg-light-purple">Search</button>.
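The class names in that example follow the style of the Tachyons utility library; each one maps to a single rule, roughly like this (the exact values here are approximations for illustration):

```css
/* Single-purpose utility classes, one declaration group each */
.f6 { font-size: .875rem; }                           /* font size step 6 */
.br3 { border-radius: .5rem; }                        /* border radius step 3 */
.ph3 { padding-left: 1rem; padding-right: 1rem; }     /* horizontal padding */
.pv2 { padding-top: .5rem; padding-bottom: .5rem; }   /* vertical padding */
.white { color: #fff; }
.bg-purple { background-color: #5e2ca5; }
.hover-bg-light-purple:hover { background-color: #a463f2; }
```

The stylesheet is written once and never grows with the design; all composition happens in the HTML.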

If your first reaction to this example is to recoil in horror, you’re not alone — many people saw this methodology as a complete violation of established CSS best practices. However, there has been a lot of excellent discussion around the idea of questioning the effectiveness of those best practices in different scenarios. This article does a great job highlighting how traditional separation of concerns ends up creating CSS that depends on the HTML (even when using methodologies like BEM), while an atomic or functional approach is about creating HTML that depends on the CSS. Neither is wrong, but upon close inspection you can see that a true separation of concerns between CSS and HTML is never fully achievable!

Other CSS methodologies like CSS in JS actually embrace the notion that CSS and HTML will always depend on each other, leading to one of the most controversial methodologies yet…

CSS in JS

CSS in JS was introduced in 2014 as a methodology organized around defining CSS styles not in a separate style sheet, but directly in each component itself. It was introduced as an approach for the React JavaScript framework (which already took the controversial approach of defining the HTML for a component directly in JavaScript instead of a separate HTML file). Originally the methodology used inline styles, but later implementations used JavaScript to generate CSS (with unique class names based on the component) and insert it into the document with a style tag.

The CSS in JS methodology once again goes completely against established CSS best practices of separation of concerns. This is because the way we use the web has shifted dramatically over time. Originally the web largely consisted of static web sites — here the separation of HTML content from CSS presentation makes a lot of sense. Nowadays the web is used for creating dynamic web applications — here it makes sense to separate things out by re-usable components.

The goal of the CSS in JS methodology is to be able to define components with hard boundaries that consist of their own encapsulated HTML/CSS/JS, such that the CSS in one component has no chance of affecting any other components. React was one of the first widely adopted frameworks that pushed for these components with hard boundaries, influencing other major frameworks like Angular, Ember, and Vue.js to follow suit. It’s important to note that the CSS in JS methodology is relatively new, and there’s a lot of experimentation going on in this space as developers try to establish new best practices for CSS in the age of components for web applications.

It’s easy to get overwhelmed by the many different CSS methodologies that are out there, but it’s important to keep in mind that there is no one right approach — you should think of them as different possible tools you can use when you have a sufficiently complex CSS codebase. Having different well-thought-out options to choose from works in your favor, and all the recent experimentation happening in this space benefits every developer in the long run!

Conclusion

So this is modern CSS in a nutshell. We covered using CSS for basic styling with typographic properties, using CSS for layout using float, flexbox, and grid based approaches, using a CSS preprocessor for new syntax such as variables and mixins, using a CSS postprocessor for transformative features such as adding vendor prefixes, and using CSS methodologies for maintainability to overcome the global nature of CSS styles. We didn’t get a chance to dig into a lot of other features CSS has to offer, like advanced selectors, transitions, animations, shapes, dynamic variables — the list goes on and on. There’s a lot of ground to cover with CSS — anyone who says it’s easy probably doesn’t know the half of it!

Modern CSS can definitely be frustrating to work with as it continues to change and evolve at a rapid pace. But it’s important to remember the historical context of how the web has evolved over time, and it’s good to know that there are a lot of smart people out there willing to build concrete tools and methodologies to help CSS best practices evolve right along with the web. It’s an exciting time to be a developer, and I hope this information can serve as a roadmap to help you on your journey!

Special thanks again to @ryanqnorth’s Dinosaur Comics, which has served up some of the finest absurdist humor since 2003 (when dinosaurs ruled the web).

Modern CSS Explained For Dinosaurs was originally published in Actualize on Medium, where people are continuing the conversation by highlighting and responding to this story.

02 Mar 18:08

An argument for passwordless

A warning upfront: this is not an anti-password article, but my attempt at convincing you to consider alternatives to password authentication. I also include a primer on the different passwordless authentication techniques you might want to use. Aren’t passwords just awesome? The concept behind passwords is quite simple. You present the user with a “password” field and imply the instruction: “Please come up with a secret phrase that you’ll have to remember and protect indefinitely.
01 Mar 07:07

What’s wrong with estimates?

by Yorgos Saslis
21 Feb 20:52

12 best practices for user account, authorization and password management #MustRead #Security …

by (@francoisz)

14 Feb 07:35

50 years after the Grenoble Olympic Games: a financial sinkhole or a successful bet?

Financial sinkhole or genuine success? How did the city of Grenoble benefit from hosting the 1968 Olympic Games? (Photo: Mourad Allili/SIPA)

  • As Grenoble celebrates the 50th anniversary of its Olympic Games, a look back at the cost of the event.
  • The capital of Isère spent more than 25 years paying off its loans.
  • But it benefited from exceptional state aid and still uses many of the facilities.

The Olympic Games come with a budget that is generally hard to keep under control. While PyeongChang prepares to welcome the whole world, Grenoble settled the books on its own Games long ago. The capital of the Alps found itself in the spotlight on February 6, 1968. Fifty years later, the bill still looks steep, even though it could have been far higher.

>> Read also: Pyeongchang 2018, the triumph of the Olympic business

27 years to pay off the debt

“The Grenoble Games were no exception to the rule. Costs overran, and the city was heavily in debt for a long time,” says Wladimir Andreff, professor emeritus and leading sports economist at Université Paris 1 Panthéon-Sorbonne. “It took the city 27 years to repay the sums it had borrowed,” adds Pierre Chaix, a member of the Centre de Droit et d'Économie du Sport in Limoges.

>> Read also: What traces did the 1968 Games leave in Grenoble?

The 1968 Games cost 1.1 billion francs, and the resulting deficit came to 80 million francs. The organizers had counted on selling a million tickets; in the end only half as many were sold, even at cut-rate prices during the second week of the event. As for the revenue of the organizing committee of the 1968 Grenoble Olympics, it was very meager: 36 million francs in all.

>> Read also: The 2004 Athens Games partly inflated Greece's public debt

A 230% tax increase

A financial sinkhole that observers nonetheless put into perspective. “80 million francs is almost nothing compared with the deficit of more than a billion dollars recorded after the 1976 Montreal Games,” notes Wladimir Andreff. By way of comparison, Albertville's deficit in 1992 came to 285 million francs (43.5 million euros), and Salt Lake City lost 168 million dollars in 2002.

Even though Grenoble almost looks like a model student in this respect, the city was nevertheless forced to repay more than 200 million francs. To do so, Hubert Dubedout, the city's mayor, raised local taxes by 230% in just three years. But according to observers, Grenoble “limited the damage” thanks to two factors. First, the intervention of the state.

>> Read also: Are the Olympic Games good for a country's economy?

While the Games were expensive, Grenoble paid only 20%. “The rest was covered by the state,” explains Wladimir Andreff. “Because it was France that paid for the Grenoble Games.” At the time, Charles de Gaulle, eager to restore France's image and once again play a role on the international stage, understood the media value of hosting the Olympics.

Then, “there was a moderating effect on the debt due to inflation,” Pierre Chaix points out. “The sums to be repaid quite quickly became small. The cost of living rose, but the monthly payments did not,” he continues.

Enormous waste or successful bet?

In people's minds, however, the waste was enormous. The most glaring example: the ski jump at Saint-Nizier-du-Moucherotte, which cost 5.9 million francs. “It was never maintained. Today it is a kind of dangerous ruin that threatens to collapse. A piece of the mountain was simply spoiled to build it,” laments Wladimir Andreff. The facility was used until 1989 for training and a few competitions. But when the standards changed, the municipality did not have the means to rebuild the landing slope. The site was abandoned, then closed to the public for safety reasons.

An application has been filed with the regional directorate of cultural affairs (Drac) of Isère to have the ski jump listed as 20th-century heritage, and thus preserved, so that it does not meet the same fate as the Alpe d'Huez bobsleigh track, abandoned after the Games and since demolished.

The question of what to do with the speed-skating oval also came up repeatedly. “Redoing the refrigeration system was very expensive. The solution was to convert it. Today it no longer serves its original purpose, having been turned into a roller-skating and skateboarding track,” continues Michel Raspaud, a sports sociologist. As for the Ice Stadium, which had a capacity of 12,000 seats, it became the Palais des Sports.

>> Read also: One year after the Games, what does Sochi look like?

“Most of the facilities are still used by Grenoble residents”

In reality, “most of the facilities built for the Games have blended into the landscape,” stresses urban planner Dorian Martin. “Most of the sites were not abandoned. Unlike Athens or Sochi, whose Olympic districts are completely deserted today, Grenoble had planned reversible Games.” He cites the example of the stadium that hosted the opening and closing ceremonies: “It was designed as a temporary installation and was dismantled right after the event.”

The Malherbe district, which housed the press center, and the Olympic Village have since become residential neighborhoods with more than 2,500 housing units. “Most of the facilities and infrastructure built for the Games are still used by Grenoble residents,” Dorian Martin sums up. “The Games were a real accelerator of urban development. The police headquarters, the highway access ramps, and the Hôpital Sud still exist.”

02 Feb 17:08

The Problem with Time & Timezones - Computerphile

by /u/rschiefer
02 Feb 17:04

XOR should be an English word

by webassemblycodecom-admin

Soup xor salad? This question is much clearer than Soup or salad. Why? As we are going to see in this article, the word XOR would not allow choosing both soup and salad, which is not an intended option anyway, whereas it remains an allowed option when the word OR is used.

What is XOR anyway?

ADD, what do you do? I add. SUB, and you? I subtract. XOR? Me? Well I…

Comparing XOR and OR

Table for the XOR function:

A B XOR(A,B)
0 0 0
0 1 1
1 0 1
1 1 0

Table for the OR function:

A B OR(A,B)
0 0 0
0 1 1
1 0 1
1 1 1

The only difference between XOR and OR happens for A=1 and B=1, where the result is 0 for XOR and 1 for OR.
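Both tables can be checked directly with Python's bitwise operators (^ is XOR, | is OR):

```python
# XOR (^) and OR (|) agree everywhere except when both inputs are 1
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a ^ b, a | b)
# The last row prints: 1 1 0 1  (XOR gives 0, OR gives 1)
```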

Real Life, OR or XOR?

In real life we say OR, but usually the intention is actually XOR. Let's see some examples:

Example 1:

Mom: Son, where is your father?

Son: Working on the garden OR on the couch watching the game.

One condition excludes the other: Dad can't be in both places at the same time, he is either working in the garden or on the couch watching the game. We know Mom is sure that, given the options, he is watching the game…

Let's see all of this in a table. The Where is your father function:

Working on the garden  On the couch  Found!  Comments
0                      0             0       Not found! (unexpected)
0                      1             1       On the couch (Mom knew it!)
1                      0             1       Working on the garden (improbable)
1                      1             0       Invalid: he can't be in two places at the same time

The function returns 1 (Found!) when the inputs are exclusive, meaning that one input differs from the other.

Example 2:

Mom: Would you please buy ice cream, chocolate OR strawberry.

Son: Here are the ice creams, chocolate and strawberry.

One condition should exclude the other, but the son, very smartly, used the inclusive OR. In this case Mom's request was ambiguous. An unambiguous request would be: Would you please buy ice cream, chocolate XOR strawberry.

The Reason for the XOR Name

Given both examples, I have found two different reasons for the name XOR. The first one sounds more reasonable than the second, but please let me know if you have a good source for the origin of the name.

  1. XOR, exclusive OR, is TRUE when both inputs are exclusive or not equal.
  2. XOR, exclusive OR, excludes one of the OR conditions, XOR excludes the condition where both inputs are TRUE.

Again, I believe explanation 1) is more logical, but naming things is not always logical.

How to get to 0xDEAD

A teaser was left in Dissecting a Minimum WebAssembly module: How to get to 0xDEAD by XORing 0xFF00 and 0x21AD?

The simplest method to get to the result is to convert the numbers to binary and then apply the XOR table bit by bit:

0xFF00 is 1111.1111.0000.0000
0x21AD is 0010.0001.1010.1101
XORing:   1101.1110.1010.1101 -> 0xDEAD
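The same computation in Python, using the built-in ^ operator:

```python
# XOR the two numbers bit by bit; hex() shows the result
result = 0xFF00 ^ 0x21AD
print(hex(result))  # prints 0xdead
```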

XOR is Also an Adder, or Half of it

Below is the A+B table, compare it with the XOR table.

A B A+B
0 0 0
0 1 1
1 0 1
1 1 0 (and a carry of 1 goes to the next bit)

They are the same table. The only problem with XOR as an adder is that it can't generate the carry bit for the condition where A=1 and B=1. This is why it is called a half adder.
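In code, pairing XOR (the sum bit) with AND (the carry bit) gives a complete half adder:

```python
def half_adder(a, b):
    """Return (sum, carry) for two single bits."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, s, c)
# 1 + 1 yields sum 0 with carry 1, matching the table above
```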

XOR Is Also Used in Cryptography

Let's go back to the example where XORing 0xFF00 and 0x21AD results in 0xDEAD. Let's name these numbers in cryptographic terms:

0xFF00: the original message, in the clear, unencrypted (M).

0x21AD: the cryptographic key; both the sender and receiver know this number, it is the shared secret (C).

0xDEAD: the encrypted message (E).

To encrypt the message we use the following XOR operation: E=XOR(M,C)

Someone (an adversary) who reads the encrypted message 0xDEAD can't figure out the original message 0xFF00 without knowing the 0x21AD cryptographic key, but the receiver can decrypt the message into its original form by applying this XOR operation: M=XOR(E,C). Here is an example with numbers:

0xDEAD is 1101.1110.1010.1101 -> Encrypted message
0x21AD is 0010.0001.1010.1101 -> Cryptographic key
XORing:   1111.1111.0000.0000 -> Original message recovered!
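The whole round trip is easy to verify in Python: because XOR is its own inverse, applying the key a second time recovers the message.

```python
M = 0xFF00  # original message
C = 0x21AD  # shared secret key

E = M ^ C           # encrypt
print(hex(E))       # prints 0xdead
print(hex(E ^ C))   # decrypt: prints 0xff00, the original message
```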

In short: XOR makes cryptography possible because it allows recovering the original message:

M: Original message.

C: Cryptographic key.

E: Encrypted message.

To encrypt a message:

E = XOR(M, C)

To decrypt the message:

M = XOR(E, C)
30 Jan 21:10

A soft mini-robot to explore the human body

A soft robot, 4 millimeters long, remotely controlled and capable of moving inside the human body, has been developed by...
29 Jan 20:42

Tutorial: quick Wi-Fi sign-in with QR codes on the Freebox!

by Yoann Ferret
Rather than typing out a long password, the Freebox lets you sign in to its Wi-Fi network with a simple QR code. Just take a photo, and you're connected!
26 Jan 18:33

Update and Restart [Comic]

by Geeks are Sexy
23 Jan 20:07

Why should you care about commit quality

by /u/ted1158
23 Jan 20:03

Code alignment issues

by /u/sidcool1234