Thought pieces: Showerthoughts to researched opinion. May display signs of intelligence.

The Future of City Design: Fractals, Daniel Brown and the City of God

Mandelbrot fractals, a paralyzed designer from the UK, and a series of mesmerizing computer-generated cities might define the future of architecture.

It was Arthur C. Clarke who introduced me to fractals.

To be specific, it was The Ghost from the Grand Banks, a novel he published in 1990. In it, Edith Craig, a mathematician, spends her time plotting the Mandelbrot set on a computer screen. Eventually, drawn to the strange Neverlands it sketches on her screen, Edith joins a strange band of reclusive explorers who spend their lives mapping out the far corners of the Mandelbrot set. They share little discoveries – ‘islands’ and ‘seas’ – real-life analogues drawn by a mathematical equation.

It was probably not Clarke’s best work; I barely remember the main plot. But the fractals stuck with me. When I managed to find a Java applet that drew the M-set, I clicked and prodded in childish wonder at the nebula-like, multicolored Neverland it built for me.
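
(If you want to see it for yourself without hunting down a Java applet: here’s a minimal Python sketch of the standard escape-time algorithm those applets use. The viewing window, resolution and color map are arbitrary choices of mine.)

    import numpy as np
    import matplotlib.pyplot as plt

    def mandelbrot(width=800, height=600, max_iter=100):
        # Sample the complex plane over the M-set's usual viewing window.
        x = np.linspace(-2.5, 1.0, width)
        y = np.linspace(-1.2, 1.2, height)
        c = x[np.newaxis, :] + 1j * y[:, np.newaxis]
        z = np.zeros_like(c)
        escape = np.zeros(c.shape, dtype=int)
        for i in range(max_iter):
            mask = np.abs(z) <= 2            # points that haven't escaped yet
            z[mask] = z[mask] ** 2 + c[mask] # the famous z = z^2 + c iteration
            escape[mask] = i                 # last iteration each point stayed bounded
        return escape

    plt.imshow(mandelbrot(), cmap="twilight", extent=(-2.5, 1.0, -1.2, 1.2))
    plt.show()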

From Malin Christersson’s blog. From left to right, we zoom deeper into the fractal – and new features develop as we go. Some of these features even have names: Seahorse Valley. Elephant Valley.

I saw trees, islands, neurons – enough to keep me occupied for a long time. You see even stranger things when you go 3D.

The tribox, from http://www.skytopia.com/. Notice the Alien-esque visuals.

Ever since then, I’ve been not-so-systematically cataloging the uses of fractal visualizations. Out of everything I’ve seen, the most eye-catching use is …

City design.


Daniel Brown (http://danielbrowns.com/) is an architect, designer and programmer. He’s pretty famous in certain circles – especially in anything to do with generative architecture. Both Jonathan Ive (Apple) and William Gibson (Neuromancer) are fans of his work, and that’s high praise.

I first came across his work in 2013, when the Daily Mail (UK) ran an article about a paralyzed designer who created flowers out of mathematical formulae.

Like these.

Daniel used an algorithm to create a flower form that he then adapted to create new, individual ‘species’ – all of them distinct, but all so plausible-looking.

Fast-forward a bit, and Daniel Brown pops up on my feed again – this time with something far more complex: the City of God.

‘The City Of God is a personal project to come out of my research for the flowers commission of the Four Seasons Dubai,’ he writes on his Flickr. ‘The idea was to investigate parallels between the super-complex geometries seen in Islamic architecture and super-detailed recursive fractal patterns.’

This hyper-complex set of patterns is mesmerizing. The link to Islamic architectural geometries is interesting; even more interesting is the fact that some of these shots actually look like a city of the future – an architectural style that doesn’t exist right now, but might.

But the real showstopper is Dantillon: the Brutal Deluxe.





This is, undoubtedly, a city. It’s a strange city – but it looks like something that might very well exist in a hellishly overcrowded future. It even looks strangely organic, as if it were grown rather than entirely planned.

“Brown begins by plugging random numbers into a program, which uses fractal mathematics to create unique shapes that resemble a 3-D graph,” writes Wired, which profiled Brown in 2016. “He spends several hours ‘exploring’ the terrain until he finds an interesting form. Brown isolates the shape, and tweaks it until he arrives at something he likes. Then the program applies bits and pieces of public domain photos of 1970s apartment buildings. The result is hulking, maze-like structures that appear to go on forever.”
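
(Brown’s actual code isn’t public, as far as I know, but that first step – plugging random numbers into a fractal generator and ‘exploring’ until something interesting turns up – might look a little like this toy sketch. The Julia-set generator and the ‘interestingness’ threshold are entirely my own stand-ins.)

    import numpy as np

    def julia(c, size=400, max_iter=60):
        # Escape-time iteration of z = z^2 + c over a grid of starting points.
        x = np.linspace(-1.6, 1.6, size)
        z = x[np.newaxis, :] + 1j * x[:, np.newaxis]
        alive = np.ones(z.shape, dtype=bool)
        for _ in range(max_iter):
            z[alive] = z[alive] ** 2 + c
            alive &= np.abs(z) <= 2
        return alive  # True where the orbit stayed bounded

    rng = np.random.default_rng()
    while True:
        # "Plugging random numbers into the program": pick a random parameter.
        c = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        fill = julia(c).mean()
        # Keep "exploring" until the form is neither empty nor a solid blob.
        if 0.05 < fill < 0.4:
            print(f"Interesting form at c = {c:.3f} (fill {fill:.0%})")
            break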

If you look closely enough, you can even make out the windows in the apartments. William Gibson actually used it for the new cover of Neuromancer, saying it was exactly how he’d imagined the Sprawl.


What does this mean?

Leave out the bizarre beauty of the whole premise. Leave out the artistic value for now.

Cities are constantly evolving entities. And right now, our cities are a bit inefficient. Yes, some are beautiful, but they generally do a bad job of providing space for everyone who wants to live there – hence the steep costs of living. There’s absolutely no question that our cities, through a process of acquisition, development and design, will head towards a Dantillon or City of God-esque future.

More Dantillon.

This is where fractals come into play. Perhaps architects can use this kind of fractal-based architecture for ideas they never would have come up with on their own. Perhaps we might use them for futurism: if we know they match existing architectural patterns, I can imagine software using a blueprint of, say, London, to show how it might organically grow and look in the future. The fact that fractals are capable of mimicking human architecture – and not just road networks and clusters of buildings, but finer detail, as in Dantillon – makes this strange art impossible to ignore.

Or perhaps – as we rebuild over existing cities, even colonize new planets – we might build our cities the way Daniel Brown builds his: feed numbers to a program and watch the fractals grow.

Footnotes: 


To learn more about M-sets: http://mathworld.wolfram.com/MandelbrotSet.html

To explore Daniel Brown’s work:
http://danielbrowns.com/
https://www.flickr.com/photos/play-create/albums


AI on the stock market – Babak Hodjat and the death of the stockbroker

The computer scientists are coming for Wall Street.

Image from Sentient Technologies

One of the most-watched companies these days is a hedge fund in Silicon Valley called Sentient Technologies. Sentient is led by a scientist called Babak Hodjat, who has created a hedge fund run entirely by artificial intelligence.

Warren Buffett is famous for saying that nobody really knows what the stock market is going to do. Babak Hodjat and his team are betting that a machine learning system, trained on the right datasets, can figure it out.

This is brilliant.

Sentient and Babak Hodjat are not an anomaly: they’re the tip of an iceberg that’s been building for years. Neural networks and genetic algorithms have been used to predict stocks for quite a while now.
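
(For flavour only: a toy sketch of what such a predictor looks like, using scikit-learn and made-up prices. Everything here – the features, the model, the fake data – is my own illustration; a real fund’s pipeline would be enormously more sophisticated.)

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Fake closing prices; a real system would ingest millions of ticks,
    # order-book snapshots, news feeds and more.
    prices = 100 + np.cumsum(np.random.randn(1000))
    returns = np.diff(prices) / prices[:-1]

    # Features: the last 5 days of returns. Target: the next day's return.
    window = 5
    X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
    y = returns[window:]

    split = int(len(X) * 0.8)
    model = RandomForestRegressor(n_estimators=100).fit(X[:split], y[:split])
    preds = model.predict(X[split:])
    print("Correlation with actual next-day returns:",
          np.corrcoef(preds, y[split:])[0, 1])

(On purely random data that correlation hovers around zero – which is exactly the argument we’ll get to below.)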

I Know First does it. Bridgewater Associates, the world’s largest hedge fund (they sound very stiff, don’t they?), is building systems to automate the running of the entire company; they say they’ve already got machine prediction systems working on fund strategy.

In fact, stock markets are perfect for machine learning (to use the real term): large amounts of data arriving at high velocity. It’s actually surprising that it’s taken us this long to get to the point where the hard math and computing are being applied to Wall Street. What gives?

 


One: the efficient market hypothesis – the idea that markets are so efficient that the moment some information about a stock becomes available, the sellers and buyers do their thing, and the price of the stock comes to accurately reflect all known information.

It’s a very elegant theory, but we don’t seem to have efficient markets. Information asymmetry is a huge problem. Even if we take out insider trading and front-running and all that monkey business, the simple fact is that no human can look at all of the millions of transactions happening every single second and make the decisions that need to be made. And if those decisions can’t be made, then no, the stock prices aren’t the sum of all known information about the stock.

Two: the Random Walk. Behold:

Burton Malkiel, in his influential 1973 work A Random Walk Down Wall Street, claimed that stock prices could therefore not be accurately predicted by looking at price history. As a result, Malkiel argued, stock prices are best described by a statistical process called a “random walk” meaning each day’s deviations from the central value are random and unpredictable.

– Wikipedia

Basically, a poor economist back in the ’70s confused stock markets with quantum theory. He didn’t realize what we could do with big data and the right set of ML algorithms.
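
(Malkiel’s claim fits in a few lines. Here’s a quick simulation – on a true random walk, yesterday’s move tells you nothing about today’s:)

    import numpy as np

    rng = np.random.default_rng(42)
    steps = rng.choice([-1.0, 1.0], size=250)   # one trading year of daily moves
    price = 100 + np.cumsum(steps)              # a "random walk" price series

    # Lag-1 autocorrelation of the moves is ~0: the price history carries
    # no information about the next step.
    autocorr = np.corrcoef(steps[:-1], steps[1:])[0, 1]
    print(f"Lag-1 autocorrelation of daily moves: {autocorr:.3f}")

The bet Hodjat and company are making is that real markets aren’t quite this random – that there’s structure in the data which enough computation can find.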


 

Where does this take us?

Well, for one, a lot of people on Wall Street are going to lose their jobs. Stockbrokers – the kind we saw in The Wolf of Wall Street – will soon be dead. Instead of Leonardo DiCaprios, companies will be run by enormously complex prediction systems.

And people will entrust their money to these companies based on the kind of systems they’re running. The research and discoveries in this field are likely to be closely guarded commercial secrets, so we’ll see people picking Hedge Fund A over B because they believe A has better algorithms and better data scientists. Someone will set up a niche fund that does bizarrely well trading energy interests. Some will generalize. Vanguard will probably have its own AI running the show.

The average stockbroking firm will no longer be a bunch of MBAs, but a handful of computer scientists like Babak Hodjat, sipping decaf lattes in Silicon Valley.


(Interestingly, this brings us back to that efficient market hypothesis. If everyone were running a massive machine intelligence system capable of looking at all of those billions of data points and making reasoned predictions, then stock prices would be a much better indicator of available information.)

 


The ripple effects of writing

When I was a child, I wanted to be an author.

Unfortunately, my father, who was a smart man, took me aside and said to me, “Son, you might enjoy reading, but you’re never going to make money writing.”

Now, years later, I’ve proven him wrong, but he had a point. The world was changing. This was the ’90s: over the course of the next two decades, cell phones invaded our lives and turned into smartphones, the Internet came along and wired us all up together, and technology – especially consumer technology – accelerated at light speed.

Now I, and every other ’90s kid, had the great fortune to be born right into the era of greatest change. We’ve seen our favourite stories go from massive series of books to massive TV series. We’ve watched friends and colleagues put aside their books and blogs in favor of shorter and shorter tweets and more and more images and videos on Instagram. We’ve watched video explode until YouTube became the second-biggest search engine on the planet. We’ve watched the amount of attention we give a piece of writing shrink until our favourite articles are no longer New York Times features, but cat pictures and lists from Buzzfeed.

Now of course, this sounds like terrible news for us writers. Every piece of research out there tells us that people prefer consuming video to text, and that you shouldn’t write long articles – John Oliver, for example, now reaches more people than any New York Times journalist ever did. That’s real change. Common sense and content experts both say the written word is dead. Get on Snapchat. Get on Instagram. Get on YouTube. Or get out. Unless you’re a hot babe or a cat, you have no chance of being heard anymore.

 


 

But is that true? A year ago, I started investigating this question. Should we, as writers, give up and go back to school, learn how to crank out videos?

I found something interesting.

Despite SEO guidelines and research on attention spans, longform written content isn’t dying: it’s thriving. The New Yorker hasn’t shut down. Nor has the Guardian. Nor have longform tech sites like Anandtech or Ars Technica. If reading were dying, businesses that depend on people reading their content would be the first to adapt. Instead, Buzzfeed News now does some of the finest journalism in the world: they recently ran a massive, 4,700-word investigation into a scandal involving Sri Lankan banks, done by Chris Hamby, a Pulitzer Prize-winning journalist. The company that made short, meaningless drivel a thing is investing in actual researched, longform writing.

It’s not just news sites. Three of my favourite sites in the world are Aeon, Brainpickings and WaitButWhy. Brainpickings and Aeon assemble the thoughts of scientists, intellectuals and prodigies; WaitButWhy picks apart ideas with precision and great clarity. All of them write reams. You can easily open up any of these three and find three thousand new words waiting to be read.

It’s not even just sites. Pocket, an app designed to let people save stuff for later reading – and which I use – apparently has 20 million users who have collectively saved 2 billion articles to read on their phones. Amazon US, in January, sold over 1 million paid ebooks a day – that’s about 2.1 billion US dollars a year on books. People. Are. Reading. As much as we love House of Cards, we also, apparently, read. In fact, going back over my own work, my most-shared articles are longform.

So are the data analysts and SEO experts wrong? Are they misreading the number of people who just share memes and obsess over TV series on Twitter?

No. They’re right, too.

 


 

The way I see it, there’s a polarized curve of content. At one end of this curve you find 3,500-word undercover reporting from the Palestinian border – stuff like the Panama Papers exposé, the Snowden reports, Michio Kaku’s books. At the other end, you have celebrity gossip, click-bait SJW articles and listicles on cat memes.

The content curve.

There’s no inherent good or bad here; people consume the entire curve. We know the frequency of consumption is in inverse proportion to the complexity of the subject; we also know that content gets easier to consume as we move from the complex end to the simple end. Where we go wrong is in assuming that because the numbers are stacked on one side, the other end is wrong. Where we go wrong is in saying things like “a successful article has to be 500 words in length, not more,” or “a successful novel has to be 60,000 words or less.” Not true.

The actual value of writing large, long pieces of content is that they become the source of much of what later becomes content across this entire curve. Malcolm Gladwell’s The Tipping Point grew from a 1996 essay for the New Yorker into a theory of social situations. Once you create an original, well-researched, long piece of writing, it lives on – as quotations, as influence. That’s why the 2013 reporting by the Guardian and the Washington Post on the NSA’s surveillance shook the entire world. That’s why we all know of PRISM and the NSA despite so few of us actually having read and shared those original articles. This is why Brainpickings and WaitButWhy are viable. Good writing is hard to produce and will be read by fewer people, but it has longer-lasting effects on the world. Picture a scientist writing a research paper: he knows only a few other scientists and a few journalists will read it, but those journalists will decipher it, simplify it, package it and spread the idea.

This is why, despite the mathematics of views and clicks and social media shares, longform still exists. This is why most of John Oliver’s news is sourced from investigative journalists – a fact that he acknowledged in a recent tribute to journalism.

By attempting to make the complex end of the curve bend more and more towards the averages, we’re doing it – and ourselves – a disservice. It’s a bit like looking at the world’s population, realizing that 50% is male and 50% is female, and writing a report saying the average human has one breast and one testicle.

Confused?

The lengthy, written word becomes a mineable treasure trove that the rest of the curve slowly unpacks, disseminating it until it becomes a meme and someone photoshops a cat next to it.


So what can we – writers, bloggers, journalists – take away from this?

Firstly, we need to pick where we want to stand on that curve.

We can draw a lot of lessons from the fact that the most popular sites have a lot of very shallow content.

Consider Alexa’s ranking of the top sites in Sri Lanka. Weed out the search engines, social media and porn sites, and the top sites are HiruFM, GossipLankaNews and HiruNews – all of which are little more than gossip sites blasting crap into the ether.

Now map this onto your writing. Are we going to be a) that relatively little-read but influential blog on, say, third-world economics and social structures? Or are we going to be b) chasing popularity? Remember that Justin Bieber has more fans than Bach right now.

If popularity is what you’re after, you’re going to have to dumb yourself down a bit.

It’s the typical researcher/reporter split, even within a single subject. As a researcher, very few people may read what you write; as a journalist reporting on research, many people may read your article, but what they ultimately take away is the researcher’s message.

Einstein changed the world – but how many of us have read his papers firsthand? Most of us read the people writing about his ideas.

The ripple effect of one end of the content curve is greater than the other’s, but it comes at the cost of clicks, views and other immediate rewards. Each of us needs to pick a place on this curve that keeps us happy.

Secondly: reconsider the mediums we specialize in. Despite everything I’ve said here about the written word, nothing changes the fact that we live in a world of mixed media – podcasts and video are often as powerful as text, sometimes more so. I’ve met many former bloggers who’ve simply given up. Video is inherently further along the curve towards the simpler end, because it’s easier to consume – and that makes it easier to pick up the numbers you want.

(Unfortunately, it’s still just as hard to make a living in those fields as it is in writing. You think your blog doesn’t make anything? Try being a YouTuber, competing with the roughly 81 million other videos online.)

The good news is that behind every great podcast and talk-show episode is a script. Behind every well-written TV series are months, maybe years, of writing. Behind most John Oliver episodes are extensive articles by other journalists; in this case, they’re the researchers.


 

My conclusion is that despite the immediate popularity of shallow content, this is, now more than ever, a writer’s world. Every single day, we type out emails. We tweet. We write witty Facebook statuses that we want everyone to like. We read. We write.  The amount of text, the amount of writing, that a human being is exposed to hasn’t shrunk; it’s grown. And in this mess of tweets and 9gag memes and cat pictures, good writing, good longform writing, stands out, sending out ripple effects that spread across the entirety of that curve. The written word has been the backbone of human civilization for thousands of years – it’s not going to go away just because someone figured out that moving images are more popular.

Now, if you’ll excuse me, I’m going to get back to my writing.


Our Facebook newsfeeds can be better than RSS readers ever were

Like many others, I used to get up in the morning and check my RSS feeds. It didn’t matter where these feeds were. I always had them on call.

Now I get up in the morning and check my Facebook. And there we are: the daily dose of politics, courtesy of NYT, Al-Jazeera, the Daily Mirror, Colombo Telegraph. The best of Techcrunch and Wired. A dose of the Anti-Media. A few scattered longform blogs and a smattering of what a very limited circle of my friends are up to.

There you go. News.

I’m not alone. Ever since Facebook added the Follow mechanic, the screen I see when I log on has been changing from random noise and those horribly shareable Aunty Acid pictures to a highly curated news website, customized and served up for my consumption.

Why it works so well – and why we like it more than ye average news site – comes down to a curious psychological effect. I follow certain individuals who have three things in common:

a) They constantly seek new information
b) Because we’re friends, they tend to share more tastes or personality traits with me than the rest of my online associations
c) Each of them has a dedicated interest in one or more subjects that I care about

Anything shared by these people is inherently given a significantly higher weightage in my mind. Their unspoken endorsement – that this content is worth reading – is practically a trusted judgement.

And again, because we share certain tastes and traits, I often find that I do like what they push out onto Facebook. This in turn reinforces my acceptance of their endorsement. Now I’m even more likely to click on what they share. It’s a feedback cycle that builds my mental image of these people as a fantastic source of news. Add to these a couple of follows to the brands you absolutely know and trust — like Wired — and you’re good to go.
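
(If you wanted to caricature that feedback cycle in code – the names, numbers and update rule here are all made up – it would look something like this:)

    # Each friend carries a trust weight; liking what they share nudges it up,
    # disliking it nudges it down. High-trust sharers float to the top.
    trust = {"Nadia": 1.0, "Ruwan": 1.0}

    def rank(feed):
        return sorted(feed, key=lambda item: trust[item[0]], reverse=True)

    def react(sharer, liked, rate=0.2):
        trust[sharer] *= (1 + rate) if liked else (1 - rate)

    feed = [("Nadia", "Wired piece on generative design"),
            ("Ruwan", "Aunty Acid picture")]
    react("Nadia", liked=True)    # I liked what Nadia shared...
    react("Ruwan", liked=False)   # ...and rolled my eyes at Ruwan's.
    print(rank(feed))             # Nadia's shares now rank higher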

This is why I’m really skeptical of news aggregation sites. Facebook’s personalized selection is doing such a good job of aggregating news – especially spreading memes – that plain old websites have it really tough these days.

Mind you, Facebook’s echo chamber effect is a constant threat. This is groupthink, and groupthink is dangerous. To counter it, I follow people who hold drastically different views on the same subject. My mainstream Western media is balanced with the Anti-Media. My Google Loon and death penalty discussions have hugely vocal for-and-against splits.

It isn’t a completely balanced system, because of b): the similarities in taste and personality that make this interaction possible inevitably lead to, or arise from, viewpoints that overlap. There’s always the danger of finding that all of your friends agree on this one thing (the New Zealand All Blacks, for example).

Nevertheless, Facebook is saving me massive amounts of time and effort. Try it for yourself. Follow some brands you like. Unfollow those idiots who reshare soppy love posts and Paulo Coelho quotes – generally, avoid anyone who does not force you to learn something new with every other status. Follow a couple of good news brands.

Done right, you don’t have to visit a hundred sites; you don’t even have to visit one local news site every day – if something serious has happened, a whole lot of people will share it. We have a system, and it works.


Timebox, Don’t Multitask.

I, like most people, can’t multitask for shit.

I had a boss who could multi-task to the level where most of us thought he was insane, or had severe ADHD, or both. His work day, as far as I could see, was a furious flutter of phone calls and scribbled notes. Occasionally someone would drop by at his summons and stand there looking like a fool while waiting for the phone to stop ringing.

Very few individuals can actually multitask*. As it turns out, we don’t really do two things at once – instead, we constantly switch our attention between them, oscillating between focusing on one and then the other. Depending on how fast you can do this, you’re diagnosed with tunnel vision (very slow switching), ADHD (very, very fast switching), or being ‘okay’ at work.

There’s a problem with this approach: there is context to everything we do, and switching tasks requires context switching. This, regardless of how good, bad, male, female, black or white you are, hits you. Hard. Context switching comes at a massive cost – a period in which you doodle around trying to acquire the information needed to pull the new task off.

This is literally a waste of time, and it’s where the feeling of “I did so much but got nothing done today” comes from – your time has been sunk into constantly acquiring, shedding and reacquiring context. Sometimes people get around this – my former boss used copious notes and relied on people to follow up with him – but most often, they don’t.

That’s where timeboxing comes in.

No, not the Dr. Who thing. Timeboxing, to quote the great Pedia of the Wiki, “allocates a fixed time period, called a time box (duh), to each planned activity.” Several project management approaches use timeboxing, and individuals use it to tackle personal tasks in smaller time frames.
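
(The whole idea fits in a dozen lines. A bare-bones sketch – the tasks and slot lengths here are just examples:)

    import time

    # Give each task a fixed slot; work only on that task until the box
    # expires, then move on, finished or not.
    schedule = [("Write blog post", 50),
                ("Break", 10),
                ("Learn cryptography", 50)]   # (task, minutes)

    for task, minutes in schedule:
        print(f"Now: {task}. Nothing else for {minutes} minutes.")
        time.sleep(minutes * 60)   # in practice, a kitchen timer works too
    print("Day done.")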

I timebox. I wasn’t aware I was doing this until @dulitharw came along and put a name to it. It works. Ever since I started, my productivity seems to have ramped up: I’m getting work done; writing slowly but steadily (both the blog and a novel on the back burner); learning (cryptography and data science); finding time for a movie or a series episode; reading one and a half to two hours a day, every day; making time for extracurricular stuff – going out with friends, Toastmasters, and the planning of complete world domination; AND getting a good six hours of sleep a day. I actually work less – I write far less than I used to – but my work is getting better.

And this is while knowing for a fact that I’m nowhere near optimizing my time management – I’m just scratching the surface. I know, for instance, that I’m spending 10x more time on Facebook than I need to. There’s so much more productivity ahead.

This is in contrast to my previous year – 12-hour workdays in front of the computer; stressed-out weekends; little sleep; no social or extracurricular life; no time for anything or anyone else.  It seems natural to timebox. Even better: it allows me to work around what I think of as my “goofball hours” – zones when I have no productivity whatsoever. I work well from 9 AM to 12 PM; productivity plummets at 1 PM and hits rock bottom at 3; by 4 PM I’m up again, until 7; then there’s downtime until 10 PM. Setting timeboxes for my day allows me to neatly work around these zones. Mind you, it’s not perfect, but I put that more on my failing than on the system. For instance, I’ve been unable to ‘box my way into gymming. That will hopefully change in the coming month(s).

In fact, now that I think of it, school ran along the same lines: periods were time boxes. You could clear the maths away and get out the English books, and even though we had too many subjects, it really worked. You were never expected to do your English homework while jotting down trigonometry notes and chatting to a classmate about isomerism. That doesn’t happen. One thing at a time, dear donkey.


Bollocks.

Where, then, does this “multitasking” business come from? I believe it’s something we’ve mistakenly absorbed from the world of computers. Back in the ’60s, computer processors had a feature called multiprogramming: while waiting for the slow-as-fuck printer or hard drive to respond, they could switch to another task and hammer away at that until whatever they were waiting on arrived.

Computers being what they were, this soon evolved to the point where processors no longer worked on tasks sequentially – new things would constantly start and interrupt things being worked on, and the processor would switch between so many different things so fast that it seemed to be doing multiple tasks at the same time. Marketing caught onto this and touted it as “multitasking”. Corporate types wanting to boast about how much work they were doing caught on as well, and started spreading the myth downwards.
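
(That original trick is easy to demonstrate. Here’s a tiny sketch using Python’s asyncio – the ‘printer’ is simulated by a sleep. Notice that nothing ever runs simultaneously; the processor just switches whenever one task is stuck waiting:)

    import asyncio

    async def print_job():
        await asyncio.sleep(2)          # waiting on the slow-as-fuck printer
        print("printout ready")

    async def number_crunching():
        for i in range(3):
            print(f"crunching batch {i}")
            await asyncio.sleep(0.5)    # yield so the other task can run

    async def main():
        # One processor, two tasks: work is interleaved, never simultaneous.
        await asyncio.gather(print_job(), number_crunching())

    asyncio.run(main())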

TL;DR? Forget multitasking. Chances are you timebox as well – but if you don’t, try it. I’m no guru, but this one works for me, and it’ll likely work for you as well.

*Science says the human brain simply isn’t built for multitasking – even if you buy into the “women can multitask, men can’t” myth that’s been floating around for years. I looked this up while trying to find solutions. Apparently, yes, you can have a phone conversation while watching the All Blacks play. You can drive a car while humming the soundtrack to Requiem for a Dream. You can fold laundry while riding a bicycle. Great. That’s because, for you, only one of these requires significant cognitive effort. Driving a car is initially quite difficult: you won’t be humming anything while you’re first sorting out the clutch. But once that’s practically muscle memory, your brain is free to move on to other things. It’s the same with the rugby match – you’ll be paying significantly less attention either to the play or to the person on the other end of the call. In short, you’re half-assing two or more things – not recommended in a work situation. This is old knowledge, but for some reason a surprisingly large number of people still buy into the myth of multitasking.

Additional reading (with slightly more science):

