An introduction to the history of computing

Image courtesy of Stable Diffusion (prompt: “history of computing as a story of time and space in the style of Dali”)

We were asked to write the ‘computing and cloud computing’ chapter of an upcoming OUP book on FinTech law, and I was asked to write the ‘history of computing’ section (with a 2,000 word limit!).

Here’s what I put together. It’s not perfect, but I hope it helps someone orientate themselves.

History of computing

The modern information age began on 15 November 1971 with Intel’s announcement, under the helm of Andy Grove, of the ‘4004’ general purpose microprocessor. Six years earlier, Intel co-founder Gordon Moore had put forth a ‘law’: that the density of transistors on an integrated circuit doubles every year. This timeframe was later amended to every two years.

There are three remarkable features of Moore’s Law. The first is that it has held true for 50 years, from 2,300 transistors on the ‘4004’ in 1971 through to tens of billions in a modern CPU. The second is that the growth is exponential, compounding at an ever-increasing rate. The third is that it is not at all a ‘law’ – there is nothing written in the stars that processing technology must progress at this rate. Moore’s ‘Law’ is a prediction and a target, the culmination of marginal gains of human ingenuity and insight arising from the dogged pursuit of advancing calculation power; a testament to human determination and target fixation, which has brought humanity millions-fold improvements in computational power.
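The compounding at the heart of Moore’s Law is easy to under-appreciate. As a back-of-the-envelope illustration – a hypothetical model using the 2,300-transistor starting point above and the two-year doubling rule, not real product data – a few lines of Python show how quickly the count runs away:

```python
# Hypothetical Moore's Law projection: start from the 4004's 2,300
# transistors (1971) and double the count every two years.

def transistors(year: int, base_year: int = 1971, base_count: int = 2300) -> int:
    """Projected transistor count for a given year under two-year doubling."""
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,}")
# 1971 → 2,300; 2021 → 77,175,193,600 (25 doublings later)
```

One striking feature of exponential growth: most of the transistors ever added under this schedule arrive in the last couple of doublings.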


The rest of this section will address what humans have done with this dividend of processing power. There are three main themes: two technological, and one a business model. The first theme encompasses technologies that allow the aggregation of processing between different locations – be it network connectivity, the internet, multi-core computing, or parallel processing. These are all ways for groups of processors to compute more than they could individually. The second theme is increasing levels of abstraction: more and more layers of software and framework, taking the user and developer further and further from the transistors and memory blocks, allowing them to stand on the shoulders of giants and achieve more with less. These advancements include operating systems (in particular Linux), programming languages, database products, APIs and open source software. The third theme is the business model of renting computing power only when it is needed (‘cloud computing’), and the swift expansion of that service from pure processing-as-a-service into everything-as-a-service – with all the technology advancements set out above available on a ‘pay-per-second’ basis, with no set-up costs and fees accruing only while you are using it.

1.1. The classic history

In general, two simultaneous trends followed from the increased processing power: putting more transistors into the same large data centres, and putting transistors into smaller and more portable packages.

Initially computers had a large capital cost, required specialist skills to operate, and had limited availability. They were only available in elite environments such as universities and research labs, and access was time-allocated – people would prepare their punch cards of processing instructions in advance and load them into the computer during their pre-booked slot.
This ‘mainframe’ era of computing saw enormous advances and improvements in processing power, complexity, availability, and stability, yet the ‘mainframe’ remained a destination – a place where humans brought themselves to the computer to do computing tasks. Computing was not available where consumers were, largely because useful computers were too large to be in the home.

The next era of modern computing was the ‘desktop’ era. This saw the cost, size, stability, and user interfaces of computers improve sufficiently, along with the standardisation of components, to put ‘a computer in every home’. Computing became available to business executives, researchers, affluent (and then almost all) western families, parents and children. Computers were used as tools and toys, mostly by individual users, but sometimes for small groups in one location. Computing was now available where consumers were located, but for most of this era there were not adequate network communications to allow regular and abundant interactions between computers.

The ‘mobile’ computing era’s totemic launch came with Steve Jobs’ iPhone announcement in January 2007. Jobs announced the bringing together, into one package, of the iPod, a cellular phone, and a computer, fronted by a touchscreen covering the full face of the phone. Each of these became hallmarks of the ‘mobile’ era of computing, which achieved the promise of a computer in everyone’s pocket. The touchscreen served as both input and display, allowing it to mimic endless tools, games, websites and widgets. The more powerful development, however, was the subsequent release of an ‘App Store’ of sandboxed apps, which enabled low development costs, fast distribution, an in-built charging model – and therefore the mass proliferation of mobile apps.

1.2. Another perspective

‘Mainframe, desktop, mobile’ is the typical formulation of computing eras in the modern information age, and it must be understood. However, it is focussed entirely on where the computing takes place.

Another model is to consider the changes in when computing was done through these various stages. In the mainframe era, people performed ‘batch’ computing – preparing their materials, performing their computing, and then returning to their places of work. In the desktop era, people performed deliberate computing – they sat down at their computer to do computing tasks (which of course still happens today). In the mobile era, the real advancement of the ‘app’ model was not the flexible location of computing. It was that networks were fast and reliable enough, that ‘mainframes’ (now data centres and cloud services – more on this below) were cheap and available enough, and that software stacks were commoditised enough, for computing to become continuous: a constant stream of desktop and mobile inputs (with those edges of the network doing lightweight computing), working together with mainframe-based heavy computing that interacts with other users, computers and systems – the whole arrangement creating an indistinguishable stream of computing activity, whether the user is engaged with an interface or not.

While the desktop-to-mobile transition does not, in itself, need more than ‘just’ smaller processors, touchscreens, batteries and a suitable OS, the transition from deliberate to continuous computing has additional prerequisites.

Continuous computing means the processing is split between various processing environments, running on various operating systems. This means that rapid data networks, secure communications, identification systems, and trusted computing environments are also needed. These are all complex problems, each with their own industry histories. By the time of widespread mobile computing, however, these elements were all either priced as a commodity (e.g. network connectivity) or carried minimal direct cost (e.g. public key encryption available through open source software, or access to the iOS App Store for hundreds of dollars).

Another requirement is a central processing computer – a mainframe – which traditionally carried the large upfront costs of buying and operating the machine to run the background processing operations. Data centres pioneered a business model innovation of renting computing power on demand, which decoupled the large capital costs of owning a mainframe (hardware purchase and replacement, rent, physical security, power, telecoms, cooling, etc.) from the ability to use and access one. By providing computing power in a scalable manner, without up-front costs (infrastructure-as-a-service, “IaaS”), mainframe suppliers managed in a few short years to abstract an entire industry into a set of APIs and commoditised functions, giving birth to ‘the cloud’.
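The economics of that decoupling can be sketched with simple arithmetic. Every figure below is invented for illustration – real cloud pricing varies widely by provider and instance type – but the sketch shows why pay-per-use wins for intermittent workloads:

```python
# Illustrative 'rent vs. buy' arithmetic. Every number here is a made-up
# assumption for the sketch, not a real quote from any provider.

OWN_UPFRONT = 100_000.0     # hypothetical cost to buy and fit out a server
OWN_ANNUAL = 20_000.0       # hypothetical annual power/cooling/telecoms/staff
RENT_PER_SECOND = 0.002     # hypothetical pay-per-second cloud rate

def cost_of_owning(years: float) -> float:
    """Total cost of ownership: capital outlay plus running costs."""
    return OWN_UPFRONT + OWN_ANNUAL * years

def cost_of_renting(seconds_used: float) -> float:
    """Pay only for the seconds of compute actually consumed."""
    return RENT_PER_SECOND * seconds_used

# A batch job that runs two hours a day strongly favours renting:
seconds_per_year = 2 * 3600 * 365
print(cost_of_owning(3))                      # 160000.0
print(cost_of_renting(3 * seconds_per_year))  # roughly 15,768 over 3 years
```

The comparison narrows for workloads that run continuously, which is why the model is most disruptive for bursty or intermittent computing.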

Further investment and development of this service allowed for other mainframe upfront costs to be avoided: setting up, maintaining, and supporting the operating systems; and ‘basic hygiene’ of a modern computing environment – allowing work on differentiating products and features to begin much earlier. These were the core developments that allowed for the operation of financial services in a ‘cloud’ environment.

Since those building blocks for financial services computing in the cloud environment were put in place (2006–2008), most of the changes have been incremental, but compounding. More and more computing operations, operating systems, software suites, databases, and cutting-edge analysis and processing capabilities such as AI are available ‘as-a-service’, in what is now sometimes dubbed ‘everything-as-a-service’ – meaning developers can work at higher and higher levels of abstraction away from the core infrastructure, doing more with less, and commoditising parts of software development and software operations.

The increased importance of cloud computing has placed greater focus on security and resilience against downtime, data loss, data manipulation, data leakage, errors, privilege escalations, etc. – leading to increased security processes in cloud environments, simpler arrangements to implement those processes, and automated monitoring, such as intrusion detection. As desktops, laptops, tablets and phones became more powerful, that local processing power was increasingly used to deliver rich user experiences, and in-built hardware security features (such as fingerprint and face scanning) were deployed to help authenticate users and securely communicate that authentication to the data centre. These are all iterative in core technology terms (as compared to the pre-requisites set out above) – but they were necessary steps for the affordable, mass-market deployment of many of the products and services we associate with ‘cloud-based financial services’.

1.3. What next?

The range and complexity of software services available via the cloud will continue to grow, as the main suppliers compete. For the purposes of ‘FinTech in the cloud’ we are likely to see iterative and incremental steps to improve the service.

There are two main disruptive threats to this cloud computing model. The first is truly distributed processing, in which financial services are built without a centralised organisation, and with a need to operate in a transparent way that removes any and all authority to change the software code, or how it is run. This requires a different type of computing environment – a problem solved today by having redundant, duplicative computing and verification take place, with a consensus established as to the correct answer (e.g. via Ethereum) – but perhaps with other solutions in the future. It is, however, hard to see how a centralised cloud offering, as we currently understand it, could perform the role of a decentralised computing network in this way.

The second disruptive force is quantum computing which, if established, would offer a very significant increase in processing power, far beyond the pace of Moore’s Law. This could provide an abundance of processing power that unlocks categorical changes in how cloud computing is priced and what it can be used for, but it would take time to develop, productise, and turn into a commoditised cloud-based offering – and for developers to understand and build on the new algorithm types and capabilities.

1.4. Summary

Computing over the last 50 years has become faster and smaller, but also cheaper and more commoditised. A mesh of technologies has changed the experience of computing from a serious academic pursuit, to the background vibrations of a busy life. The builders of these technologies have become the largest and most powerful companies in the world, yet even their growth is eclipsed by the advances in the technology itself. The culmination of the emergence of a common platform and set of standards is that it allows software, businesses, banks, messaging platforms, payment systems, and more to be deployed globally, instantly, securely, on entirely rented computing power.


Legal tech reading list

I was recently asked what I would suggest reading to ‘get into’ legal tech. It’s a good question – but I think slightly wrong-headed. You’ll ‘get into’ legal tech by ‘doing’ it, not by reading about it. Books are great (shoulders of giants, etc.), but where the aim is to do a thing, there’s no substitute for actually doing it. Books will, of course, help you go farther, faster, and get less hurt along the way.

This list will always be incomplete, but it’s very incomplete for the time being. My aim is to expand this out into a general reading list in time.

Product development

Legal profession

Tech sales and adoption




How to get into ‘legal tech’

Lots of recent conversations have made me think it would be helpful to collect my thoughts on how to ‘get into legal tech’.

I think some context is needed, though, before we get to the ‘how’ part of this:

  • what do I mean by legal tech;
  • why get into legal tech;
  • what do people doing legal tech do; and
  • how do you get to do those things (and how can you get good at it).

I’m mainly writing this for a younger version of myself – hopefully helping some people leapfrog parts of their career, hopefully emboldening some people, and hopefully surfacing some information that isn’t normally written down publicly.

What is legal tech?

This is an unnecessarily contentious question that people waste a lot of time on Twitter and conference stages defining.

Some people define legal tech so narrowly that it almost doesn’t exist (i.e. technology that only solves legal user stories – so if the technology is also useful outside of the ‘legal domain’, it is not legal tech). I don’t see the point of this definition, unless the point is that any given technology is normally useful across industries. It’s also tempting to take the opposite approach – to point to the long and rich history of lawyers embracing and using technology, and call any use of technology by lawyers legal tech, including sending emails or typing a letter in MS Word. I think this definition is more helpful because it is a something, not a nothing.

If we go further, I think we can identify the ‘something’ at the core of that definition which gives us a useful platform to use for the rest of this essay. I might later write more on this, as it could be an essay in itself.

Stealing a line from someone who won’t mind me taking it: ‘tech’ is anything that removes the need for humans, or makes things faster, cheaper, more secure or more powerful. To me, you’re doing legal tech any time you’re working on a legal problem and make or use some tech which wasn’t in your or your peers’ baseline method for working on the problem. Note that this is a relative definition, and it keeps moving. Efficient methods that become the way everyone does it stop being legal tech at some point.

So, using the telephone, or email, or a word processor: no longer legal tech. Automated drafting, ML based meaning extraction, scripted approval and signature flow systems, ‘no-code’ deal flow platforms: all definitely legal tech. e-billing of legal matters, automated client on-boarding and KYC, matter management systems: also legal tech.

So this is a broad definition, but it doesn’t cover everything. For me, to be legal tech there still has to be a way that reasonably competent people would attempt to do the task without the legal tech tool. Once the tool’s method is pervasive, it’s not legal tech any more – it’s just the way.

Why do legal tech?

This is a hugely important aspect – don’t just skip to the ‘how’.

You only get one life, and it should be fulfilling. There is a tension here between following your passions and following your competencies. Common advice is to ‘follow your passion’. On the whole this is poor advice: you are better served becoming excellent at stuff, and you’ll then become passionate about being a leader in your field (i.e. commanding top pay, having agency and autonomy as to how you work, being respected by your peers and clients, understanding the methods well enough to be creative and groundbreaking, etc.). These rewards are greater, on the whole, than a middling position in the field you are, or had been, passionate about. The passion grows anyway (see here for more on this).

I’ll still answer the ‘why legal tech?’ question for completeness, but it comes with the heavy caveat above.

To me, working in legal tech raises the stakes from providing classic legal services. It’s legal services with added leverage. There is more upfront cost, compared to the late-majority / post-chasm way of solving the same problem, but also more potential upside. You get it wrong, you get no customers – sure, you learned some lessons, but you lost money. You get it right, there is far more opportunity to delight clients, earn revenue, sweep the market, make a name for yourself, etc. Getting it right is value creating (not zero-sum), meaning you can both deliver more value to users & clients, and capture more value for yourself. Higher variance, higher risk, but more reward if it goes right.

I get an intrinsic pleasure out of solving a problem through a new process that is efficient or elegant. I am in flow when working on these sorts of problems – time disappears, I think in ways I can’t properly recall or describe, etc. This led to me spending quite a bit of time as a kid hacking around music, file sharing, Photoshop, Dreamweaver, remote controls, modems, PC parts, etc. I built up competencies as a heavy user, and also a lightweight builder, of tech stuff.

I also get an extrinsic pleasure from showing this stuff to people – seeing delight on people’s faces from surprisingly good solutions, being regarded as having a good shot at doing things that other people can’t: taking the problem from 0 to 1, getting a 10x solution, etc. These are vain thoughts, maybe not even healthy thoughts, and I probably have a bias around how often this happens – but I can’t deny the thoughts exist and that they are motivating.

So, I want to maximise impact, get an intrinsic kick out of elegant system based solutions, and am a junkie for adoration: that explains why to do tech, but not legal tech.

The honest answer is I got into legal tech because I got into law first. I went to law school, became a solicitor, and built up those skills and expertise. I then had a chance to differentiate by creating new tech-based things within that context. This highlights the importance of skill-stacking. I am a good lawyer. I am a moderately good tech user / hacker. I am in no way a proper coder. I do, however, have a differentiated skillset at that combination of skills (I’ll tackle the old ‘should lawyers code?’ question another day). Each of those areas is a well-established market in which it takes years of work, and lots of talent, to stand out. In combination, however, you’re competing with far fewer people – and established methods in one field can be novel and exciting, or even more effective, in another. ‘Stacking’ skills, and then using the combination of them to exceed the greats (in utility and in fame) in any one of those areas alone, is an established method (see Range, for example).

There are a few specific advantages to the ‘[industry] + tech’ skill pair: for example, if you have an idea or a hunch, and you are able to go and build a proof of concept yourself, then you don’t need permission, you can decide to break a few rules along the way to see if it would work, and the thing you build is probably far better at communicating the idea than a description alone.

So, my path here happened in part because, from legal, ‘tech’ was a differentiator (even in a technology law practice).

There is an interesting side thought here – what level of intrinsic interest in law is necessary for it to be worth going into legal tech, or even law? You might read what I wrote above and think the law part is entirely arbitrary – it could be accounting, or surfing, or anything else. That’s not quite right. Looping back to the ‘don’t just follow your passion’ bit above: I had an interest, and good indicators I could be good, in law. I then made a somewhat arbitrary choice to go into that industry, and became more interested in legal issues and legal business issues – so that when I started applying a ‘tech’ lens to those legal issues, I cared enough about them intrinsically, and understood them enough through experience, to find the work motivating and to avoid pitfalls. I say this because one of Justin Kan’s post-mortem reflections on Atrium was about not going into an industry you don’t really care about. His strong tech competencies could not simply be turned to the legal industry (which he had not worked in before) and give him intrinsic motivation. This made life hard when success was not coming quickly for Atrium, and so the extrinsic motivation was not forthcoming either. I would also suggest, based on his other observations, that more experience in the industry, before he started, would have helped.

What do people who do legal tech do?

This is now getting to the idea that got me thinking about this post.

First, note the breadth of the definition of legal tech above. I have not cheated and defined it broadly just so that I have a long list here. I suspect all the people in the list below would say they ‘do’ legal tech (or at least, that they’re doing legal tech when doing these things).

I think all of these personas are doing legal tech:

  • the lawyer who learns how to use Visual Basic in MS Word to create 100 versions of a contract with basic differences
  • the lawyer next to them who uses and runs that same script, using an Excel table to turn paragraphs on and off, re-drafting paragraphs to have elegant phrasing and syntax with the various variables in play, and maintaining this so it doesn’t break as it is updated through a project (really, doing any of these)
  • the lawyer who sets up a conditional signing flow, and docs with dynamic / variable elements in DocuSign for a completion
  • the lawyer who reaches out to DocuSign, arranges and organises advanced-use training for their team, and supports users on issues and questions
  • the lawyer who writes a project & implementation plan for the re-structuring of a typical but fiddly document processing problem that their team encounters, works with an existing vendor to use existing but unused features to address this, and pilots the idea
  • the lawyer who gets an API key for Companies House and sets up an automatic download of documents for KYC, legislation tracking, etc.
  • the lawyer who identifies a repetitive, click-heavy, formulaic process, and builds, trains and tests an RPA approach to automate it
  • the angel investor who invests in a legal tech startup
  • the partner who buys the service
  • corporate venture teams who survey the tech landscape and work on corporate development projects in legal tech
  • people running market landscape / ideation / marketplace assessment projects in relation to legal tech
  • people using and building no-code tools, RPA, regex matching patterns, etc. to solve legal problems
  • the software developer at a legal tech firm
  • the software developer at an agency who is working on a project for a legal client
  • consulting on running and operating automated drafting projects and solutions
  • using AI / ML to analyse legal text (whether using TensorFlow, Torch, spaCy, etc., or off-the-shelf products, as a basic or advanced user)

A portion of these are outside the control of just one person (they require budget for example), but quite a few don’t. A portion of these require full time focus, but quite a few don’t. A portion of these require ‘hard’ technical skills, but quite a few don’t and are surprisingly accessible.
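To make one of the more accessible personas concrete – the regex one – here is a minimal sketch. The sample clause and the pattern are both illustrative assumptions (real contracts need a more robust approach), but this is the scale of tool that solves real problems:

```python
import re

# Pull capitalised defined terms of the form (the "Term") out of a clause.
# Sample text and pattern are illustrative, not from any real matter.
sample = (
    'This agreement (the "Agreement") is made between Example Ltd '
    '(the "Supplier") and Client Co (the "Customer").'
)

# Defined terms conventionally sit in quotes inside brackets, often
# preceded by 'the'. This pattern captures the quoted term itself.
pattern = re.compile(r'\(\s*(?:the\s+)?"([A-Z][A-Za-z ]*)"\s*\)')

defined_terms = pattern.findall(sample)
print(defined_terms)  # → ['Agreement', 'Supplier', 'Customer']
```

A dozen lines like this, run over a folder of documents, is already a useful defined-terms checker.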

Some of these look a lot like just ‘running a business’. If you’re a senior partner in a law firm, you may well be involved in major legal tech initiatives as a stakeholder or for budget approval, even if you never become operationally involved in such things. Still, if you’re buying that stuff – making the final decision on vendors, on project go/no-go, on approval gates, or on risk appetite – in my book, you’re also doing legal tech.

Some of these are only legal tech because they happen to involve law. If you were doing the same tasks in another context, it might be called [something else] tech. I don’t think this is problematic, however – the skills might have general application, but when you’re applying them in the legal sector, it’s fair to call it legal tech.

How to ‘get into’ legal tech?

I hope the above list of examples of legal tech activities helps to frame this – you can start doing legal tech from most places, I think. This can be broken down into: (a) activities you can do; and (b) decisions you can make.


In a small, non-tech-driven practice, there are opportunities: the marginal difference of small efforts can be higher. Widespread adoption of inefficient processes means ‘simple’ adjustments can reap large returns. Look at what is inefficient, frustrating, repetitive, expensive, or unsatisfying in your practice, and imagine it being better. There are two main ways to go from there:

(1) Without breaking things, with sensitivity to regulatory duties, and using dummy data, have a go at fixing it. Get a book on Python, learn AutoIt, learn Visual Basic, learn Power Automate, and see if you can make a demo of a better way of doing it. Find an ally to talk to, show them what you’ve done, get their thoughts, and think about who to show it to next.
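As an example of what such a first ‘demo’ might look like – the classic 100-versions-of-a-contract problem from the personas list – here is a sketch using only Python’s standard library. The template, field names and data are all dummy placeholders:

```python
import csv
import io
from string import Template

# A dummy contract template; ${...} placeholders mark the variable parts.
template = Template(
    "This agreement is between ${party} and Example Ltd.\n"
    "The fee is ${fee} and the term is ${term} months.\n"
)

# Dummy data standing in for the Excel/CSV table of deal variables.
rows = csv.DictReader(io.StringIO(
    "party,fee,term\n"
    "Alpha LLP,1000,12\n"
    "Beta Ltd,2500,24\n"
))

# Print one filled-in version per row of the table.
for row in rows:
    print(template.substitute(row))
```

From here, the same loop could write each version out to a file, and the CSV could be the real deal table – the point is that a working demo is a few dozen lines, not a project.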

(2) Read voraciously, attend conferences (online and physical), and seek to understand what commercial products could help. Scout out the market, see if you can get free demos, try them with dummy data, understand the costs, and work on the business model / RoI case for adoption.

In both cases, keep working on it iteratively, hone that idea, address the key objections, and work towards showing the leadership what you’ve done, how it is better, how the risks are mitigated, and your clear ask for what you want next (budget, time, access to clients, intro to someone, etc.)

In a mid-size and up, more tech-enabled practice: firstly, everything set out above can still be done. Secondly, there is probably some more sophisticated tech and software available to you. It almost certainly does more than you are using it for. Learn all of it. The vendor will have manuals, there will be training, they will probably be happy to come in and give extra training, and they may well be happy to share their roadmap or meet to discuss feature requests. Find those opportunities / vendors / problems, turn up, put in the work, and interesting opportunities will follow. Learning how to be the best person in the practice at using a particular piece of software will make you the person people talk to about this stuff: asked to help on the pitch, or whether there is another way, or why it doesn’t work. Contribute value, be useful, learn more skills, and a positive feedback cycle will form.


There have always been lots of choices in legal services about who to work for – the market is highly fragmented (and, though I haven’t looked into it, I think it has been highly fragmented for a long time in the UK). There is now, however, also a lot of choice in the structure in which you work to do legal tech (LLPs of all sizes, ABSs (including limited companies), publicly traded law firms, the Big4, start-ups of all sorts, huge tech companies…) (e.g. see here, in relation to just one region).

If you look at the list of ‘people doing legal tech’ above, have a think about which of those you would be good at, and map that against the business structures whose business model involves training and investing in you in those areas (and where solid work would add the most value), and also against particular businesses with synergies to your skills / ambitions which are performing well – in particular, look at where investment is flowing (read the industry press; look at, or set alerts on, Crunchbase). Then simply find a way to work in those places.


You’ll notice that a portion of those I listed as doing legal tech are in senior or stakeholder positions. I think there is a useful observation there: promotion in an organisation will tend to widen the scope of what you’re asked to manage, and it will increasingly include software decisions. So another ‘way’ to do legal tech becomes ‘be involved in strategy’ in a legal organisation – which will then include (now, and increasingly in the future) being involved in the use of tech and software in that strategy. This is not very actionable advice at the start of your career, but I think it is a neglected observation for people asking this question mid-career.


I hope you’ve found some original and helpful ideas in here. Lots of people seem to define things in a way that makes themselves rare, special or elite. I think high-agency people can actually do legal tech from many more environments than classic thinking says – and people can make much more difference in the ‘staid, conservative profession of law’ than people (particularly young people) think.


How do you stay up to date?

One of my favourite interview questions is to ask people how they stay up to date. I have never heard what I consider a good answer. Smart, successful, well-educated people – but it seems they have a poor information diet.

I am setting out my approach here to try to help people improve – and to open myself up to feedback.


  • My professional areas are: law, tech, crypto, data. I’m generally interviewing for the role of being a lawyer in the technology sector – so would expect answers to cover ‘law’ and ‘tech’. These are broad areas, which touch on society in lots of ways, which I think is what drives people to think that generalist news sources are sufficient to stay ‘up to date’ in these areas.
  • The specific examples below are from these sectors, formed in the context of practicing law in these areas, so won’t be professionally useful for lots of people. I do think the approach probably applies to any information worker however.
  • I am not so focussed on the method: RSS, audio, websites, print, email. Yes, there are bonus points for being organised, but that’s not really the point here.
  • I am not against reading junk – I agree to an extent with Taleb’s idea of ‘barbelling’ what you consume (e.g. consuming both classics and junk) – but I am focussed here on the ‘job to be done’ of how to stay up to date and informed.
  • I think that being unstructured in your information diet, or not having thoughts on how to approach this, is a sign of an uncurious, or ill-disciplined, mind. The world is too interesting, too changing, and too noisy, to not have an approach on how you stay up to date with reliable information, and thought provoking analysis of it. An answer that vaguely talks about ‘the news’ is a non-answer, and reflects poorly on the candidate, in my view.
  • I am time poor, so I choose to prioritise succinct, high quality information. Of course, in one sense no-one is time poor: we all have the same amount of it – some people just have more commitments, make different amounts of pre-commitment, and ‘spend’ their uncommitted time differently. The general point is that there is so much to do, see and learn in the world that being efficient in your information diet is important – either to get more information in the same amount of time, or to spend less time doing so. This is why good information is generally worth paying for. This note doesn’t consider budget very much, but there is also lots of free information listed below.
  • Switching off and protecting your mental health is important. I’m not saying spend all your time staying up to date – digital detoxes, and true leisure time, are good – but when you are reading for information, read the good stuff.
  • I get that an interview is a stressful setting. But I still think this is an easy question if you have good habits. I don’t see how the stress of an interview could lead someone with good reading habits to give a bland answer. There’s a time and a place for everything – lots of people consume media (even ‘staying up to date’ media) which is personal, private or edgy, which you wouldn’t want to share in a professional setting, and you might panic filtering it in real time – but if you have good reading habits, there should still be plenty left to talk about.
  • I haven’t listed other more general interesting, high quality, information sources here – focussing on the ‘how to stay up to date’ in legal technology.

Categories of information


Newsletters

In my view, the best, most concise, most consistently high quality information I read is from newsletters, not newspapers. Analysts make a name for themselves and trade on their brand. Yes, you have to pay for the full content. As I’ve got older, I’ve become happier paying directly for good content. I appreciate this is a luxury, but it is also an investment. Most of these also have free content too.

  • Tech
    • Stratechery
    • Lenny’s newsletter
    • Light Blue Touchpaper
  • Finance
    • Matt Levine Money Stuff
  • Crypto
    • Pomp
    • The Defiant
  • Law
    • Calleja Consulting


Blogs

Same benefits as newsletters – just delivered in a different format (and generally free). More and more newsletters and blogs are the same thing delivered in both ways, of course. For example:


Twitter

Just for information gathering, probably better than newsletters, but you need to put a lot more in to get anything good out: you have to curate your feed, decide how much weight to give each source, and the incessant nature of the feed means it’s not a good way to get an overview of things. It is outstanding at throwing lots of ideas into the mix for you to assess and contemplate, and come to new views. Some of the greatest minds in every field put their thoughts out for free on Twitter.


Podcasts

Great resource – super high quality information – surfaces lots of things to then go and read into in more detail.

  • Tech
    • Exponent
    • Dithering
  • Finance
    • a16z
    • Infinite Loops
    • Invest Like The Best
    • Panic with Friends
    • Waters Tech
    • vc:20
    • Real Vision
  • Law / Legal Tech
    • Legal Tech Arcade
    • LawNext
    • Technically Legal
    • Evolve The Law
  • Crypto
    • Unchained
    • What Bitcoin Did


Online communities

Again, very underrated, and I think part of a solid and respectable answer. For example, HackerNews or Slashdot for tech, or a well curated list of subreddits (for any topic), might be some of the best ways to stay up to date and have a diverse set of news sources pushed to you, with (sometimes) high quality discussion surrounding them.

Industry specialist websites

Another great resource – the exact resources will depend on your industry. For example:

  • Market data – Waters Tech
  • Crypto – Coindesk
  • LegalTech – Artificial Lawyer
  • LegalTech – Legal Evolution
  • Law – The Lawyer

Professional subscriptions

These can be very good. The legal ones which I have access to can be a little dry (and very broad) – and so I don’t always believe people when they say they read them a lot. Still, the right sources here, read diligently, would be a solid base for legal areas – but you would have to factor in the lag time between new events and these sources having proper write-ups of them. Examples in law:

  • PLC
  • Mondaq


Newspapers

I generally consider this to be a weak answer. Fine for entertainment or general social matters, but to really be up to date in your professional area, I generally find them (on the whole) too vague, too broad and too superficial. The FT is nearly the exception here – I read it a lot, and do pick up some good information – but it’s mostly for entertainment.

Internal training sessions

Yes, these can be very good. But everyone gets them. There is often a long delay between an event and the session. Relying on them doesn’t show much hunger or proactiveness, and is not enough on its own. Someone needs to run those sessions, and they need to stay up to date – where does that person get their info from?

News Magazines

In my view, these sit somewhere between newspapers and industry specialist sources for usefulness. They are likely to have more detail than the newspapers, but the breadth of topics each magazine needs to cover means that minor but still important topics will slip through, and there will not be enough nuance or opinions on the critical topics. In an interview, I don’t see this as a very impressive answer – not bad, but not great.


Books

Books are great, and an important part of the information diet. For ‘staying up to date’ purposes, the delay in books coming out should be balanced by also reading fast information sources – together they help create a rounded range of sources and information types.


That’s not market

As I make my start at Deloitte Legal, the ‘what is Four Corners Intelligence’ conversation keeps coming up.

Wood for the trees

It’s been a really useful exercise in getting clear and focussed on what Four Corners Intelligence is and is not. It has made me realise that for a while we had been operating in a safe-space of people that had seen, touched, and used Four Corners Intelligence – and we had lost our touch at conveying the what and how (and our value proposition) to people new to the product.

The genesis story

One story from the early days of Four Corners Intelligence, shortly after I built it, helps put some colour on this.

We were negotiating for a tier 1 bank, with the market leader on the other side – known for being inflexible in negotiating its standard terms. Some ‘top of house’ relationships meant a deal must be done, but the vendor’s lawyers were resisting reasonable changes.

We were bogged down on what derived data the client would be allowed to make (what research, analytics, indices, products, etc.) – we just wanted a fair position that matched what the client could do with the rest of their database, and was a reasonable market position.

So that’s how I ended up in 2015 arguing ‘what’s market’ for derived data with the M&A lead of a white shoe US firm, and the GC of information services at the market leader. We said tom-ah-toe, they said toe-may-toe, but we definitely didn’t mean the same thing.

It turned out, however, that we had put the client’s data licences into Four Corners Intelligence: we had every important clause of every contract organised, categorised, analysed and instantly reportable.

It was high stakes – the deal team asking why this completion requirement wasn’t finished yet – me, a lowly associate, emailing a senior partner and M&A lead on the other side directly, telling him he was wrong – but I was right.

I had the proof that we were right, line by line. That was the easy part – 5 seconds to pull it from Four Corners Intelligence:

Vendor 1: "Derived Data" means any content, information or data created by Customer or any Designated User … that could not be reasonably be reversed engineered by a typical individual client

Vendor 2: Derived Data means Information other than news which has been Manipulated to such a degree that the original cannot be recognised or traced back to us

Vendor 3: Nothing herein shall permit the Recipient to: … (ii) distribute information derived from the Rates where this amounts to or could be used as a direct substitute for the Rates and Service provided under this Agreement.

Vendor 4: the know-how, copyright and other intellectual property rights, database rights and all other rights of whatever nature in and to the data resulting from analytics performed by Customer on the Data are the property of and vested in Customer; provided that in no event shall any of the Data be identifiable or traceable from the resultant data or the resultant data be capable of use substantially as a substitute for the Data.

Your position: that Licensee may not use or disseminate the Data in any way which could cause the information so used or disseminated, in Licensor's sole good faith judgment, to be a source of or substitute for the Data otherwise required to be supplied by Licensor or its affiliates or available from Licensor or its affiliates.

The harder part was telling the lead M&A partner:

I think this is an unreasonable definition and not in line with market practice.

We will be required to go to data vendors and ask them not to assert ownership over data which can be calculated or reverse engineered from, or could be used as a direct substitute for, their data.  We have set out another alternative for the drafting below.  If this cannot be accepted, then we need to escalate this.

– Me.
– 2015. Late at night.

Gulp.  Draft.  Re-read.  Click send.  Hope it’s ok….

Long story short, we won the point, and it helped the client on the way to getting a good deal. Backing up our view with data, and getting it over in real time, had made all the difference. Preparation meant the leg work had been done – the message was in fact so powerful that the hard part was delivering it delicately. The sense of power was a world away from the oh-so-typical ‘defeat by appeal to authority’ that happens in these things, and this was the forge in which Four Corners Intelligence started.