DailyThem.es is a social writing platform that aims to help English learners improve their language skills in an enjoyable, non-intimidating way. The concept behind the site is simple: each day, users are encouraged to write 100 words about any subject they want, and then exchange feedback with other writers. Users also get access to analytics that tell them what language errors they tend to make.
Article courtesy of TechCrunch
The popular language learning platform Duolingo today announced that it has closed a $20 million Series C round led by Kleiner Perkins Caufield & Byers. As Duolingo’s founder Luis von Ahn told me, the company plans to use this new funding to continue on its path to dominate the language-learning market.
Duolingo previously raised a Series A round led by Union Square Ventures in 2011 and a $15 million Series B round led by NEA in 2012. Duolingo also counts Ashton Kutcher and Tim Ferriss among its investors. The company’s previous investors also joined in this new round.
Von Ahn told me Duolingo now has about 25 million registered users and 12.5 million active users. That’s up from the roughly 10 million the company reported at the end of last year. This means that more people now learn a language with Duolingo than in the U.S. public school system. Von Ahn attributes at least part of this growth to Apple choosing Duolingo as its iPhone App of the Year, which marked the first time Apple chose an education app for this honor.
As for why the company decided to raise a new round, von Ahn told me that he received quite a bit of inbound interest from venture capital firms. While he talked to a few, Kleiner Perkins felt like the ideal partner, not least because Duolingo will work with Kleiner partner Bing Gordon, who will also join the Duolingo board as an observer.
The company also plans to use the additional funding to ramp up its hiring. Duolingo currently has 34 employees — most of them engineers and designers — but plans to get to 50 in the near future.
Duolingo will soon release a groups feature that will make it easier for teachers to use the service in their classes (and track their students’ progress). Duolingo also expects that large companies, which now often use Rosetta Stone and similar tools to train their workforce, will start using this groups feature.
Von Ahn has a track record of building successful products based on these hybrid approaches that bring together human collaboration and technology. With reCAPTCHA, which he sold to Google, he turned CAPTCHAs into something useful by combining them with OCR technology.
With Duolingo, he is building a language learning and translation tool that is based on these same principles. The service teaches you a language, but at the same time, you are also using some of the practice translations to translate real sentences for paying customers.
Last year, the company partnered with CNN and BuzzFeed, for example, to translate some of those companies’ articles into Spanish, Portuguese and French. The company is working on similar deals with other publishers, and both CNN and BuzzFeed have renewed their original contracts.
Going forward, this will obviously be a major source of revenue for the company, but von Ahn also expects that Duolingo will open its self-service portal for translations within the next two months.
Duolingo doesn’t have an immediate goal to break even, though. “We have revenue, and that’s good,” von Ahn told me, but his plan for now is to grow the user base (and that, in turn, will also strengthen the translation side of the service). “Our main goal going forward is to become the de facto way to learn a language,” he told me.
When the company asked users why they use Duolingo, many said they considered it a game that is both entertaining and useful. This isn’t something Duolingo set out to do, but based on these findings — and with Bing Gordon among its advisers now — the company plans to add more game elements to its service in the future.
One thing von Ahn says he won’t do in the future, though, is pay for advertising. The company has never spent a single dollar on ads and doesn’t plan to do so anytime soon.
Article courtesy of TechCrunch
Language learning software has never been particularly “sexy,” nor have its makers managed to produce a bounty of memorable user experiences, for that matter. While it might not be quite ready for the former, Duolingo is fast becoming the poster child of a new generation of language learning products that are actually enjoyable to use, and able to combine fun with function.
In December, Apple named Duolingo the “iPhone App of the Year” for 2013, which, along with its nearly 9 million active users, seems to indicate that its gamified, mobile language-learning formula is working. And last night, the startup capped off its banner year by taking home the Crunchie award for “Best Education Startup” of 2013.
While CAPTCHA, reCAPTCHA and Duolingo founder Luis von Ahn wasn’t able to make it to the Crunchies this year, Duolingo’s new head of communications, Gina Gotthilf, was there to accept the award. We caught up with Gotthilf backstage to ask her a few questions about Duolingo’s success and where it’s headed next.
As to Duolingo’s origins, the language learning app owes its design and original concept to von Ahn and his student, Severin Hacker, who developed Duolingo as the basis for a translation service powered by students translating real-world sentences while learning a language. Today, this has become a key point of differentiation for Duolingo in the increasingly crowded world of language learning.
Rather than forcing its users to memorize phrases, sentences or words, the app turns to the Web, literally, to provide fodder for students to learn by translation. As users proceed through its lessons and programs, they translate the Web, reading and listening to the language as presented by real, native speakers. The app leverages video clips, images, sound bites and other interactive prompts to help students memorize and remember important words and concepts.
The other key to Duolingo’s success, especially among the millennial crowd, is its use of gamification: students earn points as they proceed through its lesson plans and are docked “lives” when they make mistakes. Get too many wrong and Duolingo makes you start over.
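The mechanic described here — points for progress, a small pool of lives docked on mistakes, a forced restart when they run out — is simple enough to sketch. The class below is a hypothetical illustration of that loop, not Duolingo’s actual implementation:

```python
class Lesson:
    """Toy model of a gamified lesson: points reward correct answers,
    lives are docked on mistakes, and the lesson restarts from zero
    when the lives run out."""

    def __init__(self, lives=3, points_per_answer=10):
        self.max_lives = lives
        self.points_per_answer = points_per_answer
        self.reset()

    def reset(self):
        self.lives = self.max_lives
        self.points = 0

    def answer(self, correct):
        """Record one answer; returns False if the lesson restarted."""
        if correct:
            self.points += self.points_per_answer
            return True
        self.lives -= 1
        if self.lives == 0:
            self.reset()  # too many wrong: start the lesson over
            return False
        return True
```

Keeping the penalty state inside the lesson object is what makes the restart feel like part of the game rather than an error message.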
The app monitors your progress as you go, keeping tabs on which words and concepts you struggle with, and serving its lessons accordingly. Today, Duolingo offers a total of six languages — English, Spanish, Italian, German, Portuguese, and French — and more are on their way.
Just like von Ahn previously did with reCAPTCHA, the original idea behind Duolingo was always to use it as the basis for a translation service. Over time, Duolingo expects to turn this into a tool where businesses can order paid translations from Duolingo’s users. Currently, the service only uses its web app to show these real-world texts from blog posts, news articles and Wikipedia entries, but in the near future, the iOS app will also integrate these features.
It’s worth noting, though, that Duolingo will always remain 100% free for those who want to learn a language. As von Ahn stressed, this means there will be no ads, no freemium model and no “five-easy-payments” plan. The combination of the language learning and translation service, von Ahn says, is “a mutually beneficial arrangement for both parties: students receive a high-quality, completely free, language education, and organizations are given human-quality translation services.”
Article courtesy of TechCrunch
I have a Jeep about half my own age, and despite the creaks in both our joints, we somehow manage to create a semblance of grace now and then. The vibration of the engine, transmitted through the bones of my foot as it lies on the clutch (lightly enough not to feather it), or the degree and delta of centripetal force (unconsciously, I lean left to align my head with this off-axis down) explain wordlessly to me the limitations of the tires’ grip as I round a frosty curve, the elusive triple point that lies between momentum, throttle, and gearing. And I’m no racing driver — you have this loop, too, whether you drive a manual or automatic, whether you maneuver aggressively or defensively. It’s something that happens when you and the car reach an accord, so to speak.
A few Christmases ago I bought the family a great old axe, but at first its unfamiliarly short and straight haft made me more likely to split my own foot than the morsel of wood awaiting its sentence before me. Over the course of a few dozen swings I found it didn’t want to be wielded like an executioner’s axe, describing as many degrees of a circle as were warranted by the toughness of the wood, but it preferred to be brought down straight, like the guillotine. This necessitated a totally new movement of my hands and body but eventually it struck with greater power and precision than I had been able to muster with its modern, long-necked predecessor.
Between me and my Cherokee, and between my hands and the tool, and between you and many of the things you use every day, there is a complicated but elegant feedback loop, a physical dialogue, the topic of which is harmony of operation. The relationship that you build with a device, whether it’s a car, a hammer, a brush, a cello, or anything else, is a self-optimizing relationship. First you make it speak, then you make it sing.
Why does this matter? Because so few of the devices we are adopting today will ever sing like that.
It’s not just that things are complex. Driving a car is complex; the forces, sounds, visual input, motor coordination and everything else that goes into driving become second nature because we learn to operate the vehicle as an extension of ourselves. And it’s not just that things are virtual. Anyone who has had a complicated workflow and found themselves the master of ten windows spread over three monitors and two operating systems has juggled a dozen tasks and ideas, performing as complex a task as an orchestra conductor or jet pilot.
The problem is that we are introducing processes with maxima we can’t raise and minima we can’t lower by our own efforts. No axe is so difficult to use that you can’t master it in time. But no matter how good you are at using a smartphone, the elegance and quality of your process is, fundamentally, out of your hands.
With what devices and services today can you achieve the same level of synchrony as that you enjoy with your car as you parallel park, your fork and knife as you herd peas around your plate, your keyboard as you tap out a caustic response to this article at five characters per second?
I see exceptions for coders, who achieve a sort of second sight with the colors and symbology of their language of choice, for gamers whose thumbs make analog sticks and 256-stage buttons dance through a hell of bullets, and for photographers, their fingers blindly yet unfailingly seeking out dials and switches while the brain simultaneously calculates the arc of a ball or the fraction of a second left until the toddler’s smile strikes its apex.
But the most ubiquitous device of the modern digital era, the smartphone, is not susceptible to such talents. It may be always in your hand, but it never acts as an extension of it.
Oh, sure, you can learn the quickest way to get a picture through retouching and into Instagram — the “Save changes,” “Send to…” and “Submit” button positions memorized, the geotag set to automatic, the service sniffer set to repost and promote the latest item at the requisite SoLoMo watering holes. Congratulations, you’ve built a Rube Goldberg machine that mechanically duplicates button pressing. And what a profoundly inelegant series of arbitrarily placed button presses it is, interrupted by unskippable dialogues, animations, and workarounds!
Have you ever remarked on the grace with which an iPhone user closes down unused processes? The casual dignity of a flick to bring down a notifications shade, the inhuman rapidity with which a home button is double-pressed? Of course not. You could practice button-pressing and menu flicking for weeks and your flicks and presses would be little or no more effective than anyone else’s.
Wearables? True, gestural tech and limb tracking like that of the Kinect or Myo adds an interesting new way to interact, but these things are meant to capture gross, simple, or repetitive movements; even if the nearly imperceptible twist of the wrist employed by a painter to add an ironic curl to the lips could be detected, would it matter? The threshold for whatever gesture he has indicated was reached long before such subtleties were taken into account. You think a photo will show more detail because you pinch-zoomed exactly along the 45-degree line? You think a page will load faster because you clicked at the exact center of the link?
As one last example: even in photography, the satisfaction of successful operation is being eroded. Many lenses and systems do not actually connect the focus ring to the focal gearing, but instead read the position of the ring digitally and pass that information to the CPU, where its scale, jitter, and acceleration are weighed; this data is returned to the chip in the lens, which adjusts the focus by approximately the amount it thinks you would have wanted it to move, had it been mechanical to begin with. Naturally this takes time and is rarely satisfying or accurate. But even if it were advanced to the degree that it were imperceptible, it would still be inferior to the mechanical process because it is a simulation of it; and if it advanced beyond this, and predicted your focal point (let us, against all odds, assume this works flawlessly), it is no longer you operating the mechanism or the simulation of a mechanism, but rather using a ring-shaped menu to select from a list of subjects. Just try to make that sing.
There’s no room for finesse or subtlety in these things because we are not the ones performing the work, or rather, we perform only a small part of it and set into motion a series of events over which we have little or no control. The digitizer, the processor, the transceiver, the microwave repeater, and the server do their work following, but independent of, our input. And before we could even do our part, the developer of the app, the developer of the firmware, the developer of the OS had to do theirs. Layer upon layer of things that you are not doing, that you can never effect, only activate.
I don’t pretend this is the end of doing things well, of course, or any other such absurd extrapolation. But I myself, and I suspect this is true of many others, get no little satisfaction from the process of doing things well, and here before us is a generation of tools which can only be instructed to carry out tasks, something you and I will never do better or worse than one another. Egalitarian? Democratic? That’s a charitable interpretation, if you ask me. Eliminating the necessity of doing something well could be a positive change. Eliminating the possibility of doing something well is a negative one.
Still, it’s not so dire as I make it sound. The consequence of all this is that there is more room to excel on a different stage, a higher one. If everyone has access to the same resources, it is the one who makes the best of them who takes the prize. Given the finest ingredients and top-notch facilities, no two chefs will produce the same meal. With the same light and the same camera, two photographers capture images that are worlds apart. So this embarrassment of riches comprising (among a hundred other things) the Internet, the social media landscape, and our fantastically powerful mobile devices is nevertheless empowering — but it is no longer the tools with which we interact that we must make sing, but what we are making with them.
No one can use the Facebook app better than another — but one may use the network to greater effect. No one can apply a filter with more finesse than another — but one may assemble a superior portfolio. No one can make an API return different data than another — but one may put that data to better use. No one can propagate an email through the network faster — but one may be more persuasive. The axe swings itself — but you can still build a better fire.
Article courtesy of TechCrunch
Late last year, a couple of venture firms sought to invest in LEAP Motion’s gesture control technology by helping developers to build businesses around it with an accelerator. Today, the LEAP Axlr8r is opening for business and announcing the first 10 participating companies in the program.
LEAP Motion has built an $80 hardware device that allows any user to control what’s happening on their computer through an interface that tracks the movement of their hands. It’s had more than 70,000 developers sign up to test out and build apps for the device, but few actual apps have been launched so far.
The LEAP Axlr8r seeks to change that by taking LEAP Motion’s technology to the next level. With backing from Peter Thiel’s Founders Fund, as well as SOSVentures, the firm behind hardware accelerator HAXLR8R, the incubator sought out startups doing interesting things with the next-generation gesture control platform.
Like other incubators, LEAP Axlr8r provides participating companies with a small amount of funding — in this case, $25,000 — and puts them through a three-month program that is designed to refine the products and services they’re seeking to build. Housed near LEAP Motion headquarters in San Francisco, those companies will have access to the engineers who built LEAP Motion technology, as well as a number of mentors who can help with other aspects of the design process.
The whole thing ends in a Demo Day on May 9th. We’ll be tracking their progress and are looking forward to seeing what they release. The first 10 companies participating in the accelerator include:
Article courtesy of TechCrunch
On January 26, 1974, the world changed. A panoply of creatures popped into existence – Owlbears lumbered out of the woods while Bulettes snuffled out of caves, blinking in the sunlight. Adventurers donned metal plate armor and led their ambling pack horses into darkened dungeons. Traps sprang, capturing teams of dwarves in iron nets while gold glittered tantalizingly close to a shambling skeleton. For some, the ’70s were an era of free love. For others they were the era of untrammeled adventure.
Dungeons & Dragons, the pen-and-paper role-playing game that engulfed so many of our childhoods, turned 40 this weekend to little fanfare. Originally created by Gary Gygax and Dave Arneson, the game built on many RPG mechanics that were beginning to percolate out of board gaming at the time. Like Diplomacy before it, D&D games were engrossing romps through worlds yet unvisited by the player. However, unlike the traditional board and die drivel, D&D borrowed from the popular mythologies of the day and the strange world of J.R.R. Tolkien and his ilk. The result – an all-encompassing game that took cues from improvisational theatre, model-building, war gaming, and political strategy – has become a signifier in the tech world for an interstitial generation of pre-electronic nerds, as much as the bong and bell-bottoms defined the generation that spawned these role-players.
Things didn’t look good for D&D parent company TSR before it was bought out by Wizards of the Coast, which went on to create the third and fourth editions of the game. A fifth edition is planned for this year.
D&D created nearly everything we hold dear. The games we love today – from Skyrim to Halo – all owe gratitude to the sweeping expanse of D&D gaming. Western fantasy is basically one long D&D game played solo. Even the team structures and hierarchies we’ve come to use in business – from black belt consultants to agile teams – owe something to the camaraderie of kitchen-table dungeon crawls, and the language of the Internet (arch, knowing, snarky, and at the same time earnest and obsessive) is the language of the RPG gamer.
It is hard to overstate how culturally important D&D was. It made us nerds powerful, and gave us a sword, a shield, and a dragon to slay in some dark corner. Like Bilbo tasked with robbing Smaug, we take the lessons of D&D and make beautiful art, conduct business, and interact as humans. It’s something we need and I’m glad it was born.
[Image from Diterlizzi, which you should visit if you want to learn where a lot of the original monsters came from.]
Article courtesy of TechCrunch
Baidu, the company best known for making China’s top Internet search engine, just made its translation app available for iOS. The descriptions for the iPhone and Android versions are in Chinese (search for “Baidu Translate” in the stores), as are its intro/instruction screens, but don’t worry–the app’s user interface is mostly in English.
One of Baidu Translate’s key features is voice translation for English, Mandarin and Cantonese, which I found to be very accurate. It’s a nice free alternative to other voice translation apps, most of which only offer Chinese support as an in-app purchase. But that’s not the coolest part. I am having the most fun with Baidu Translate’s image recognition feature, which lets you take a picture of something and circle it. The app then attempts to identify the object and tell you what it is in Chinese and English.
The image translator’s accuracy is currently hit-and-miss, with often hilarious mistakes. It correctly identified a picture of my kitten (though it called her a Norwegian Forest Cat, when she is actually a Taiwanese Alley Cat), an Xbox 360 controller (but not by brand–Baidu Translate just described it in English as a “wireless handle”), a tin wind-up toy, a mug, a pen, and a chair. But the app thought my Homer Simpson Pez dispenser was a “cartoon duck” (to be fair, he does look like a yellow rubber ducky) and it described an image of Harry Potter as a “braised carp head.”
Shortcomings aside, Baidu Translate is the best time I’ve ever had with a translation app (I can’t believe I just typed that sentence). Google Goggles can also be used for translations, but the process of getting results in another language is more complicated, and it often does not recognize images (or even attempt to tell you what an object is, as Baidu Translate does).
The app’s camera has the bonus of allowing you to switch between the image recognition and optical-character recognition features on the same screen. I found that the latter’s accuracy also varies widely, but that’s true for most OCR apps.
Baidu, which is often referred to as “China’s Google,” employs many talented data scientists, so I expect the app’s image recognition feature to improve quickly as more people use it. Kaiser Kuo, Baidu’s director of international communications, told me that the company’s Silicon Valley-based Institute of Deep Learning, which focuses on artificial intelligence, will continue to work on all its features.
“We don’t have release dates fixed yet for future iterations but we’ll continue to improve all aspects of it: OCR, voice recognition, and of course image recognition accuracy. Image recognition is of course extremely tricky: You don’t always know from a flat image the size of an object, and variation in lighting, in shading, in angle and so forth for a three-dimensional object will make this challenging for some time to come, but we’re very proud of what we’ve managed so far,” Kuo said in an email.
That means there will be fewer amusing mistakes, but hopefully updates will turn Baidu Translate into an increasingly awesome addition to all the great language learning and translation apps that have been released over the last couple of years.
Article courtesy of TechCrunch
Amazon’s plans for autonomous flying delivery drones are so last year. The ecommerce juggernaut is purportedly working on something far more dystopian: pre-shipment.
Amazon has filed a patent for a shipping system designed to cut delivery times by predicting what buyers are going to buy before they buy it — and shipping products in their general direction, or even right to their door, before the sales click even (or ever) falls.
Which really is one more step towards cutting out human agency entirely from the ecommerce roundabout. Why not have machines autonomously buy stuff from other machines and have a third set of autonomous bots deliver it — while the quaking flesh recipients who open the door meekly accept whatever packages they are getting in the hopes that yet more machines won’t decide today is the day to harvest their organs.
[Aaaand right on cue, the doorbell rings. It's a delivery man, with -- you guessed it -- an Amazon parcel for me. This interaction should be entirely normal but there's a distinctly sinister undertone, even though I'm 99.9% sure that the thing inside the box is something I ordered last week, not something Amazon thinks I'll want to order next week. Or the thing I ordered a few minutes ago. But that, presumably, is exactly where Amazon is aiming to go.]
The patent, which was filed in August 2012 and granted December 24 last year, describes a method for what Amazon calls “anticipatory shipping” — with one pre-shipping scenario (of the multitudes detailed) being as follows:
…a method may include packaging one or more items as a package for eventual shipment to a delivery address, selecting a destination geographical area to which to ship the package, shipping the package to the destination geographical area without completely specifying the delivery address at the time of shipment, and while the package is in transit, completely specifying the delivery address for the package.
The anticipated location of packages might be determined by analysing various “business variables”, according to the patent. Data that could be analysed to determine customer demand for a particular pre-shipped package to help decide where to route it geographically could include historical buying patterns, preferences expressed explicitly via surveys/questionnaires, demographic data, browsing habits, wish-lists and so on.
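The patent doesn’t publish a formula, but the idea of weighing “business variables” to pick a destination area can be sketched as a simple weighted score. The signal names and weights below are purely illustrative assumptions, not figures from the patent:

```python
def score_area(signals, weights=None):
    """Combine an area's normalized (0..1) demand signals into a single
    speculative-shipping score; higher means the area is a better
    candidate to receive the package ahead of any actual order."""
    weights = weights or {
        "historical_orders": 0.5,   # past buying patterns
        "wishlist_adds": 0.2,       # explicit interest (wish-lists)
        "browsing": 0.2,            # implicit interest (browsing habits)
        "survey_preference": 0.1,   # stated preferences from surveys
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def pick_destination(areas):
    """areas: mapping of area name -> signal dict; returns the area
    with the highest demand score."""
    return max(areas, key=lambda a: score_area(areas[a]))
```

A real system would of course learn such weights from outcome data rather than hard-code them, but the routing decision reduces to exactly this kind of per-area ranking.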
The patent also goes on to discuss in detail various scenarios for “speculatively shipping” packages to destinations and how to re-route items based on proximity to potential customers — and even how packages might remain in near continuous transit on trucks until a customer makes a purchase.
At times the language of the patent sounds as if Amazon is thinking of physical item delivery in the way a utility might approach supplying water or electricity to homes — by forecasting demand spikes and lulls, and tweaking its pipeline accordingly, but above all by keeping the stuff flowing (ergo having trucks constantly filled with packages in continuous perpetual motion).
Such a system would likely require an overhaul of its existing ecommerce inventory and time-management systems (assuming Amazon hasn’t already started deploying the apparatus to support anticipatory shipments) — to make them more dynamic and responsive. But that in turn may allow for better inventory management, as the patent notes:
… speculative shipping of packages may enable more sophisticated and timely management of inventory items, for example by allowing packages to begin flowing towards potential customers in advance of actual orders.
And in those instances when the demand prediction algorithm fails, as well it must, the patent suggests Amazon could deliver the package anyway — as a gift to someone who hasn’t actually clicked to buy it yet, but who, its data analysis suggests, might quite like it — i.e. if the cost of returning/rerouting the item exceeds the cost of paying a surprise visit to a pre-customer.
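That fallback reduces to a plain cost comparison: hand the mispredicted package over as a gift whenever returning or rerouting it would cost more. A minimal sketch, with hypothetical cost inputs rather than anything specified in the patent:

```python
def handle_misprediction(return_cost, reroute_cost, gift_cost):
    """Pick the cheapest disposition for a package nobody has
    actually ordered: send it back, redirect it to another area,
    or deliver it anyway as a goodwill gift."""
    options = {
        "return": return_cost,
        "reroute": reroute_cost,
        "gift": gift_cost,
    }
    return min(options, key=options.get)
```

The interesting part isn’t the arithmetic but the inputs: the “gift” cost would have to price in the reputational risk the next paragraph describes, not just the shipping.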
Which could either be a great surprise, or hideously inappropriate — depending on how good an oracle Amazon’s algorithm turns out to be. Inappropriate like delivering a DIY Will pack to someone who has already died, say. Or kids’ toys to bereaved parents. Anticipatory algorithms are going to have to navigate plenty of human pitfalls if they’re not to end up clanging on the doorbell.
In the U.S. Amazon paved the way for carving a huge chunk out of the ecommerce pie by patenting the right to one-click buying, all the way back in 1999. That patent has stood it in excellent stead over the years, requiring other ecommerce players to license this method if they want similarly speedy checkout processes.
Pre-shipping has the potential to let Amazon do that again by taking the online buying process to the proverbial ‘next level’ — and some. Clicking buy and getting your stuff hours or minutes later would be huge. Albeit, in future it could well be a case of: Amazon users, be careful what you wish(-list) for.
(Via the Wall Street Journal)
Article courtesy of TechCrunch
Let’s talk about doge, but first let’s talk about the late great David Foster Wallace, who 13 years ago wrote a classic essay about modern English* entitled “Tense Present,” which, realistically, is better than anything I will ever write, so I should maybe just point you to it and end this post here.
But I won’t. Not least because I strongly suspect that if DFW had not taken his own life five years ago, he would already have updated “Tense Present” for the modern era. He almost would have had to.
It is instructive that his essay includes the phrase You don’t (despite withering cultural pressure) have to use a computer, but you can’t escape language. That may have been true, just, in 2001, but it is not true today. You cannot escape computers any more — and that fact has affected language in a way which is, if you ask me, nothing short of revolutionary.
Once upon a time, not so long ago, most people didn’t write much, and even if they did, only a tiny handful of people might read the results. As a result, most of the words that people read were written by a tiny elite group of authors and journalists, and almost exclusively in an anodyne, pristine mode which DFW in his classic essay called SWE, for “Standard Written English.” (Also “Standard White English,” but I’m not even going to go there, except to say again that you should read his essay.)
I’ll go further and say that the overwhelming majority of widely-read nonfiction was written in an even smaller, strictly controlled subset of SWE – call it CSWE, for Clinical Standard Written English. Textbooks. Cookbooks. IRS instructions. The New York Times, the Wall Street Journal, the British broadsheets, etc. All written in a similar mode: authoritative, declamatory, distant, dispassionate, impersonal, and (allegedly) neutral. Formal, pure, and precise.
The problem, of course, is that English, as actually used by 99% of its practitioners, has never been even close to formal, pure, and precise. As James Nicoll** famously put it:
The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and riffle their pockets for new vocabulary.
So why did SWE become the standard? DFW answers:
The real truth, of course, is that SWE is the dialect of the American elite. That it was invented, codified, and promulgated by Privileged WASP Males and is perpetuated as “Standard” by same. That it is the shibboleth of the Establishment and an instrument of political power and class division and racial discrimination and all manner of social inequity.
Easy enough to perpetuate when only a tiny elite were writing the words that most read; but now is different. Now is the era of social media. Now people are both reading and writing more words, by far, than they ever have before — which is great, right? — but only a small and diminishing fraction of those words are written in SWE.
Once upon a time, high-school teachers and broadsheet newspapers and their ilk defined how English was written, and the few semantic scofflaws were the linguistic equivalent of outlaw renegades. No longer. Now that definition is provided by Reddit. Nowadays we have different online dialects for cats and dogs, and people actually use both. Nowadays even scholarly articles may include a “tl;dr” summary. Nowadays: