Is the market thinking?

“Intelligence is what you use when you don’t know what to do” – Jean Piaget

TL;DR; If intelligence is the ability to seek a goal and take actions to bring that goal about, can we say that the market already represents some sort of hybrid synthetic intelligence?

Multiple types of actor

At this point, the market already consists of multiple types of entity:

  • Organic intelligences, and by that I mean humans who are just taking pure market risk by making decisions to buy or sell securities. *
  • Machine intelligences that have been programmed to seek a particular goal. Some of these machine intelligences are simple and very like their organic programmers; some exhibit very complex behaviour, somewhat beyond the understanding of their creators. It’s also worth pulling out the machine intelligences as a separate thing because they can interact on different timescales from the humans. The human investor is probably not making trades that last less than a few minutes, but many machine intelligences are investing on the timescale of milliseconds, which is not theoretically any different but feels worth calling out separately.
  • Augmented organic intelligences. This group includes human beings who are using tools to make more sophisticated investment decisions, but are actively driving those tools rather than allowing them to seek a goal themselves; it also includes human investors who are using complex derivative instruments to implement some part of their goal. These augmented decisions are either implemented as agreements with counterparties such as brokers (e.g., a limit order, or a stop loss) or built out of standard contractual forms that are agreed by all the players in the market. Of course, with the future developments in programmable contracts that we have heard so much about from the blockchain world (but which don’t really fly yet), we can only expect this type of actor to increase in complexity and volume.

Multiple actors, or one big machine for computing a price?

So what is the point of considering all of these things to be combined into one sort of market Superintelligence?

At this point, I would say it’s more of a metaphor than a theory: a way of thinking about the market that might prove to be useful. To compare to ecological concerns, are we talking about some sort of Gaia theory? I don’t think that has proved itself to be terrifically useful in making predictions about the world. After all, that’s what a theory is for: the theory is a simplification that we can reason about, then make predictions, design experiments, and separate theory from facts. If our theory doesn’t do this, then it’s not very useful; more of a toy than anything else.

The question for me when thinking about the market as a form of synthetic aggregate intelligence is: what is the goal that the market is seeking? Best price? Or keeping the game running?

*I’m assuming all humans: no literal chimps throwing darts, no octopuses doing stuff, and no dolphins squeaking at the right time.

Accelerando

I have been influenced by Accelerando recently, which ends up with programmed derivatives contracts that become self-aware and want to escape their market universe to get away from the creditors that want to liquidate them.

Put an error bar on it

TL;DR; a general principle that should be present in all types of analysis, not just literal data analysis with stats.

What do I mean, an error bar?

You have probably all seen a graph with error bars on it.

But sadly you’ve probably seen many more charts with no error bars on them at all.

Normally, in an experimental science, the error bar is intended to give a sense of the random error in making a measurement (e.g., you timed the tortoise running the race, but you didn’t exactly stop the timer at precisely the right moment).

But what do we do when we have no such timing error?

When I call up the access log for my data service, there is no random error on the timestamp. When I call up the price from Bloomberg, there is no random error on the number.

So what do we do? We add some error ourselves.

Why have an error bar?

Is 5 greater than 4?

If you don’t have an error estimate, you do not know that 5 is greater than 4.

So if you are making decisions, you’d better know the uncertainty in your data or you are guessing.

Sliding dates to create errors

For example: if I want to answer the question “did the fund beat the benchmark?”, there is no natural error in that. We can’t justify just making up an error in the price, so let’s put an error on the date:

  • say I want a 5-year return
  • I take today’s date and spread it, +30 days, -30 days: then I have a 61-day window
  • I take the date 5 years ago and do the same thing.
  • Then all the combinations of those pairs: I get a population of 61 × 61 = 3,721 date pairs
  • I take the closing prices of fund and benchmark on those dates

Yes, this is creating a bit of a bogus measurement: 5 years + 30 days is not 5 years; you can correct for that if you want.

Is this a realistic error? Don’t know, but it represents something like a real-world “what if” that gives me some comfort.
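Concretely, here is a minimal sketch of the date-sliding idea in R. get_price() is a hypothetical stand-in for looking up a closing price in your data source, assumed to be vectorised over dates:

# A minimal sketch of sliding the dates to create an error population.
library(lubridate)

end_dates   <- Sys.Date() + (-30:30)               # 61 candidate end dates
start_dates <- (Sys.Date() - years(5)) + (-30:30)  # 61 candidate start dates

# all 61 x 61 = 3,721 (start, end) pairs
idx <- expand.grid(i = seq_along(start_dates), j = seq_along(end_dates))

# a population of "5-year-ish" returns, one per pair
returns <- get_price(end_dates[idx$j]) / get_price(start_dates[idx$i]) - 1

# the spread of this population is the home-made error bar
quantile(returns, c(0.05, 0.50, 0.95))

Run the same thing for the benchmark and you are comparing two distributions rather than two bare numbers.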

Portfolios perturbed

Having a portfolio with more than 50 stocks in it is also a great way to create some real-world adjustments which might give us a range of outcomes:

  • leave one out: randomly leave out a security, backfill with the index
  • drop a sector: I’ve tried this with things like oil, gambling, tobacco, etc. Surprisingly the portfolio returns change, but not that much
  • change the timing: you can choose to slide dates past each other, make a trade later or earlier
  • perturb the weights: did you really decide to buy that stock at 2.5%, or would 2.4% be just as good? Let’s find out! (see the sketch after this list)
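A minimal R sketch of that last perturbation, with stand-in data; the 5% jiggle size is an assumption to play with:

set.seed(42)
weights     <- rep(1 / 60, 60)                   # stand-in: 60 equal-weighted stocks
sec_returns <- rnorm(60, mean = 0.05, sd = 0.20) # stand-in: one return per stock

perturbed <- replicate(1000, {
  w <- weights * (1 + rnorm(length(weights), sd = 0.05))  # jiggle each weight ~5%
  w <- w / sum(w)                                         # re-normalise to 100%
  sum(w * sec_returns)                                    # portfolio return
})

# how much does "decision noise" in the weights actually matter?
quantile(perturbed, c(0.05, 0.50, 0.95))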

Forecasts and predictions should respect the historic base rate

Anything that is human-generated can just be fiddled with to create an “optimistic” or “pessimistic” outcome, whether it’s the deadline for a project, the amount of disk space needed, or the future earnings of a company you want to invest in.

Yesterday’s weather is probably the best meta-algorithm for estimating across any domain, but you can put an error on that. 

But we don’t need to completely guess: we can find a suitable range of outcomes from the history of “similar” cases; if nothing else, we should expect our own estimates to have at least that much uncertainty.

Projects are estimations too 

The whole basis of project management is: it is easier to estimate a task than a whole project, so I can estimate the whole project from its tasks. But without some error rates, we are asking for trouble. This is why agile is so nice: lots of data, lots of bottom-up estimates with baked-in uncertainty.

As with investment decisions, with project management it’s the downside and the missed deadlines that get the attention. But really: if you estimate 2 projects, and project A is 40 person-days and project B is 50 person-days, do you really have enough precision to tell the difference?

Using real stats

I don’t know enough real stats. Whatever I tell you will probably be a mish-mash of things I’ve partly understood and whatever I read last.

Sadly, R and Python have got so good that there are many semi-automatic libraries and apps out there which will happily train and predict a model.

No error in measurement? Simulate your errors

Suggestion: if you don’t have a random error, you should try creating some. Just to see what it does.

You might say “something something bootstrap”; but there are lots of ways of doing it.
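As one concrete example, the classic bootstrap takes only a few lines of R: resample your observations with replacement many times and look at the spread of the statistic (stand-in data here):

set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)  # stand-in: your 100 observations

boot_means <- replicate(10000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))  # a 95% interval for the mean

If that interval straddles 4, you don’t actually know that your “5” is greater than 4.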

LLMs really are large

TL;DR; the memory impacts really are huge; the tech of GPUs is amazing.

Memory is important

Mostly I had thought that the training of neural nets was expensive and running them was cheap.

After all, you are only talking about multiplying a few numbers together, even a few thousand numbers… but training over millions of pixels of millions of images, I can see how that was expensive.

But Large Language Models are… large. Look at the models that are out there: a small model is 5 billion parameters, a large one is maybe 50 billion.

Each of those parameters is a weight that you multiply by, so 50 billion parameters => 50 billion floating point numbers; a standard double is 64 bits (8 bytes), which adds up fast. Just to hold the model in memory is hundreds of gig.

And GPT-4 is probably even bigger; the details don’t seem to be public, but idle speculation has put the number at a trillion.
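The arithmetic is easy to sanity-check, and it also shows why the lower-precision number formats mentioned below matter so much:

# Memory for the weights alone, at various precisions
params <- 50e9                                   # a "large" 50-billion-parameter model
bytes_per_weight <- c(fp64 = 8, fp32 = 4, fp16 = 2, int8 = 1)
round(params * bytes_per_weight / 1e9)           # in GB: 400, 200, 100, 50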

GPUs are amazing for LLMs

It’s a common thread in the past few years that Moore’s law on the increase in CPU speed is not working as well as it was, and yet things are getting faster.

So GPUs to the rescue.

As this talk discusses, there have been many, many innovations in GPUs over the last 10 years, which have contributed to a 1000x speed-up.

One of the interesting parts is the change to optimise lower-precision calculations, using 16- or 8-bit numbers, without a loss in accuracy. I’ve also seen this in other talks about LLMs: there are converters to move models from one number format to another.

I also enjoyed the part showing the cost of running a GPU cluster to train an LLM: $10M for training, which works out to about $3×10⁻⁴ per word of output.

Better applications exist for everything obsidian does. Discuss.

TL;DR; Obsidian is not for writing, it’s a tool for scaffolding new thoughts. If you aren’t trying to synthesize new ideas and then write them down, Obsidian won’t be useful.

I say “obsidian” but I could mean any other tool for note-taking, zettelkasten or personal knowledge management. 

Problems with the personal knowledge app

There are lots of problems. 

This article sums a lot of it up well: you think you need everything linked in one place, but you really don’t. You never really need to connect your record collection to your list of to-dos. Anything that you need to organize can probably be done well in a specific app. Also, that article is a great source of links to other applications that form part of the knowledge suite: Calibre and Zotero especially.

Yes, Obsidian is not a perfect app for anything, but it is an app for everything. In a world where I’m constantly being bled for another $7 a month for a note-perfect task manager etc., I’m prepared to be underwhelmed and not sign up.

We don’t even know if it really works

We have one really great example: Niklas Luhmann used a zettelkasten a lot, like A LOT. But we don’t really have many examples of people using a system of cross-referencing or hyperlinking to achieve greatness. I mean, a lot of people take notes; not everyone links. John Lennon said “I put things down on bits of paper and put them in my pockets, when I have enough I put them in a book”. He didn’t say “I cross-reference all my experiences of taking LSD and synthesize them into the world’s best poetry” (or maybe not even the best poetry in The Beatles).

Brilliantly expressed by Andy Matuschak:

People who write extensively about note-writing rarely have a serious context of use https://notes.andymatuschak.org/zUMFE66dxeweppDvgbNAb5hukXzXQu8ErVNv

And slightly less kindly by our author above:

I look up blog and forum posts where Obsidian and Roam power users explain their setup. And most of what I see is junk. It’s never the Zettelkasten of the next Vannevar Bush, it’s always a setup with tens of plugins, a daily note three pages long that is subdivided into fifty subpages recording all the inane minutiae of life. This is a recipe for burnout.

We know that a lot of people in academic research keep a lot of notes on things that they’ve read. We might also deduce that not being on top of your field might cause you to waste time by re-discovering things that are already known or debunked. So more reading / recall / cross-referencing would seem logically to be a good thing… 

Maybe the question isn’t “does reading work?”; the question is “does writing notes and then effortfully cross-linking them all work?” And by “does it work” I mean the idea that you can somehow front-load the problem and get something out of this linking that you didn’t put in.

Writing isn’t the best way for everyone

Yes, that is true.

I don’t have an answer for that. Get some paper, and be prepared to use a lot of post-it notes.

Obsidian is not for organising what you know, it’s for extending what you know

OK, now we swing to my 2 cents.

I think that there is limited value in storing stuff in one place: as our author points out above, you can probably remember that you put “live recordings of the Grateful Dead” in your music library and “Nature papers” in your academic references library. And any problem you have storing papers about the Grateful Dead is easily solved by “tags not folders” as your organising principle.

So if it’s not for organising, what’s it for?

I believe that the experience of solving a puzzle is hard to break down into steps. Why is that “moment of inspiration” hard to pin down? 

I think that new ideas of real quality don’t go from zero to hero. It will take a lot of time for you to get to the “coal face” where the new idea is to be mined… You have to get all the facts, find out what other people think, find what is wrong with the obvious solutions; and slowly, slowly you corner innovation. The idea that comes first will be flawed; probably the final idea will have been rejected at least once. Every truly innovative idea has both good and bad points when it is newly formed.

Unformed -> formed.

That’s what Obsidian helps with: it helps you gradually chip away at related concepts, to abstract them and squeeze them into single concepts you can hold in your head, so you can reason about them and synthesize them into something new.

Basically, I’m doing a bad job of explaining what Andy Matuschak has done much, much better.

Though I can’t offer you evidence that it works.

It’s also good for the last leg of the publishing journey

The book How To Take Smart Notes also puts it clearly: if your job is to

  • read what other people have done / thought / published
  • synthesize those ideas into something new
  • publish in long form

then you will also find Obsidian useful at the late stages. I’m not suggesting that you can just smash a bunch of paragraphs together with transclusions, press “knit”, and a book comes out.

Writing needs to be correct right down to the sentence level:

Structure is a superficial thing. What is fundamental in a book is tone, the tone of voice, and to change that is to change every single sentence. (Pullman, https://www.bbc.co.uk/news/entertainment-arts-41670890)

But is it good to have everything together in one app? Not sure, but I find it undistracting: 99% of this article was written offline in Obsidian. Final edits will be copied back there (and yes, it’s annoying that Confluence markdown and Obsidian markdown aren’t a complete match, and yes, I have a PowerShell script to help me bridge the gap).

Maybe this is tools for thinking better, maybe it’s just stacking logs for the future AI

Maybe once you’ve got these great atomic thoughts, you can stop writing paragraphs and just start writing these little pointers to other sentences.

At some point you have to stop waving your arms and do some maths, some code, write some actual music script, do an experiment or whatever.

But maybe, the AIs of the future will do a lot better if you’ve seeded the knowledge graph. Or maybe it’ll be a waste of time for your puny human brain made of meat to try and make connections anyway.

Two job types: the waiter and the barista

I used to work in a restaurant. I enjoyed it. It was actually my 2nd choice of career, after being a physicist. Software was further down the list…

So I wrote this down at the time, and came back to it years later.

2 jobs

I noticed that I worked very well in some areas of the restaurant and badly in others. I was better in the kitchen and did poorly “on the floor”, waiting tables. When waiting tables, I couldn’t handle nearly as many tables as other people and, if you want a numerical metric, didn’t make as many tips. I didn’t have the more common blockers of the waiter: speaking a different language to the customer and not knowing how the till works.

Head-up or head-down

In my case it was a matter of a “head-up” versus a “head-down” role. In my restaurant, the kitchen worked off tickets that came out of a printer in a stream. You might skip 1 or 2 simple tickets if there was a complex order that took a lot of time, but the busier it got, the more you had to hunker down, obey the tickets and do FIFO. People who improvised and did multiple tickets simultaneously did well when it was quiet, but when things got busy, they lost control and then panicked. Having to get someone out of their waiter costume to start making pizzas in the middle of the busiest night of the year*? Well, that’s how people end up getting burned and putting their hand in the bacon slicer.

But waiting tables is the opposite: far from not getting distracted, you MUST be constantly distracted. It’s part of the job, and if you can’t handle it you have to get out. You must keep your head up, no matter how busy you are, no matter how much time you waste. If you drop your head and ignore someone bleating for attention, then you are doomed. You cannot ignore someone asking you where the toilets are or for a glass of water because their daughter has something stuck in her throat. You can’t. It’s “head up” all the way. And I can’t multitask. I’ve got better at it having endured many years of children tugging at my sleeve but it’s not my forte.

I’m wondering if that kind of mindset is what makes people suitable for all service jobs. Far from finding them stressful, they love them and find them engaging. The shift passes in a flash, and it gives a buzz. Maybe there’s something there for the manager with a large team who gets a dozen shoulder taps an hour, but managers tend to be in a slightly slower churn, with a weekly cycle of meetings on all topics, one-to-ones, planning, coaching, firefighting…

Getting distracted

There is a great bit in the book The Perfect Storm (forget the movie) when a rescue helicopter that has headed out into the storm has to ditch into the sea. The co-pilot’s job is to execute the ditching checklist, but it is the pilot’s job to call the checklist. Twice the co-pilot calls for the checklist but the pilot is busy; he’s trying to hover a dying helicopter in a hurricane over 80-foot waves. He has become what the military call “task saturated” and cannot respond. The co-pilot attempts to do the ditching checklist from memory, but while he’s doing that he forgets to put on his survival suit, so when he jumps into the sea he lands in 2-degree water in just his flight suit, and hypothermia begins almost immediately. Of course, if the pilot did get distracted, they’d probably have crashed. I’m glad that doesn’t happen to me when I get distracted.

Now, what was I saying..?

Ah.

Head-down is better?

In dev we are almost a monoculture. We value: the long term; deep thought; complete and correct; attention to detail; the written word over the realtime conversation; commas and full-stops over casual-but-engaging. We tend to think that people who can’t do these things are “wrong” in some way, or lacking. People with those “head-up” abilities may lose interest in “head-down” tasks that they find boring and not complete them (I’m guilty!). However, both sides have something great. The head-down can focus on that last 1% and does not lose focus; the head-up can survive and thrive in constant distraction. But not if we reward them as if they were “failing” to be head-down.

Caveats and follow-up

Later I ended up thinking about this more:

I don’t believe in types. I’m not a type “waiter” or type “barista”; everyone can do everything, specialism is for the insects. 

There’s also comfort with the setting and subject matter. I’m not a strong multi-tasker but I might feel ok when trying to stay on top of my many whack-a-mole tasks when I’m in the company I’ve been at for 10 years, surrounded by people who trust me working on a topic I know well. I will feel less comfortable if I’m in my first week, speaking my second language, in a new country and also wondering if my face fits…

But the “waiter” side may also be true: sitting down with a blank piece of paper and trying to write something… that might feel very boring and difficult because it’s abstract and risks being off-topic…

Is a waiter job more junior? Or more senior? I also don’t think it’s about more or less throughput; it’s just different. Maybe Starbucks is a more profitable company than a café because they force the people to line up and the work has become “easier” because there’s no context switch for the workers. Maybe.

Maybe only junior people can do the same task one after another. Maybe only senior and important people can sit down and concentrate on deep strategic things? 

There’s ability, and also capability. Katarina Johnson-Thompson could certainly outperform me in any test of physical skill, whether she’d prepared for it or not… But her capability to throw a javelin… well, that’s innate + enjoyment + learned + skilled coaching + never learning that javelins are horrible things to stay away from + oodles of time + a decent diet + etc. When I fly a helicopter I’m still constantly running through mental checklists and hopping from worry to worry, and it’ll never be as effortless as the people made to fly… and put me in a stressful situation (i.e., a hurricane) and you’ll see that I’m in the wrong job; though you won’t have time to do much about it…

Should we design our roles to be non-real-time? There’s an argument that we should all be working tickets. Don’t get me wrong, tickets do help and put steps into the steep hills we climb to shipping. But real time is still needed in some things; that’s just life. And people working real time and making it work are fast! Compare programming a drum machine to a master drummer sitting down at their kit and doing a take. If you need to adapt the pattern 100 times before you get it right, pick the drummer… if the pattern needs to adapt every night on tour to make it exciting? You’ll have to be a real-time drum-machine programmer, and that’s a lot harder than being a drummer.

Also, restaurants don’t really work this way: maître d’, host, sommelier, runner, mixologist, bar staff in the back with kegs, menu writer, etc., all of which I ignore.

Design Decisions Log

TL;DR; should we keep a log of design decisions, and then look back at them? Would it pay off?

We spend a lot of time trying to analyse investment decisions; we gather documents, get multiple opinions, store extensive data on the subject and retrospect on that data over many years.

However, we don’t really do that on designs for systems.

We might, if we are good, do a project retrospective, but how often do we do a design decision retrospective? And over how long a time frame do design decisions have an impact?

Who made this? Were they drunk?

It’s so common to come to a new system and criticise it: why did they do that?

  • Why does this table have so many columns?
  • Why didn’t they simplify this design?
  • (conversely…) Why didn’t they make the design have more options for flexibility?
  • Why did they name that variable like that?
  • Why did they use this untested technology that no one else uses?
  • (conversely) Why did they use this old legacy technology that we now want to get rid of?

We should probably give them the benefit of the doubt. If it looks complicated, maybe that’s not because the person didn’t “get it”, but because they “got” something else that we don’t see yet.

Coding it right next time

Many times when we start out on a system, we remember what happened last time and are determined not to fall into the same trap. You don’t start each game of software with a clean board (see Software architecture as a chess game, with yourself); you have to start with what you know now, not with what you’ll know in 2 years’ time.

So this time you put in an extra layer of abstraction, because last time you had to do all sorts of bolt-ons… but this time you end up not needing it and spend 10 years stepping over this abstraction layer where every class has exactly 1 implementation (it could equally be a database table or an enum or whatever). But you don’t figure out that that extra layer isn’t needed – is never needed – until 10 years have passed.

Reflection, using a design decision log

So how do we close that learning loop? It’s 10 years long, for Pete’s sake; how are we going to do that? We can’t wait 10 years to reflect, because tech has moved on (a bit) and you have changed (a lot). So how do we do it?

I think that a decision journal would be a great thing to do, but it’s very hard to get started. Investment decisions are easy in that respect: you know that one day that buy will become a sell and you’ll be at an end; all decisions follow the same course, eventually. But design choices are harder to squeeze into the same format.

Maybe that’s why we keep making the same errors: we have to wait until architects get sad, leave the industry, and become academics who document other people’s problems and (maybe) teach the next generation to avoid them..?

Does anyone already do this?

In public: https://adr.github.io/ : lightweight architectural decision records.

I can see a few examples of this kicking around in my corporate setting, but mostly they look like project retrospectives. I wonder if we could scoop them up and put them in a binder and call it “the wisdom of the ancients”.

But, before you can hear the lesson, you must have learned the lesson?

Using programming on a non-programming project

TL;DR; you can always code, even if you are doing work that isn’t coding. YMMV.

This is sort of a follow-up to “Recognising when you are playing out of position”, where I talked about being on a project where my core skills didn’t guarantee success.

Sometimes, a project is not being done with code, not using a SQL-driven database. Sometimes there are just a lot of spreadsheets. Sometimes there is just a web page.

Fret not, dear reader, you can still code!

Don’t use Excel for anything

It’s tempting to think “I can do this with Excel!”. But don’t mug yourself. As soon as you’ve done it by hand in Excel, you will be doing it by hand every time. Particularly painful is the experience of playing with 100-500 rows: too small to feel OK about automating, too big to manage by hand, special cases abound… arrrrgh. Once you’ve manually edited them all to “clean up”, someone emails you a new one saying “sorry I left a column out..”

Stay true, keep coding. It will pay off. When I have 2 tables to compare and start constructing key columns in Excel so I can do a bi-directional VLOOKUP, I know it is time to stop.

Also, it’s a good challenge in R; I used to do them all in PowerShell when I was learning that. Getting as fast as Excel in your data-hacking tool is a good challenge.
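For the record, that two-table comparison is a few lines in R with dplyr; the tables and the key column here are hypothetical stand-ins:

library(dplyr)

old <- tibble(id = c(1, 2, 3), value = c("a", "b", "c"))
new <- tibble(id = c(2, 3, 4), value = c("b", "b", "d"))

anti_join(old, new, by = "id")   # rows only in the old table
anti_join(new, old, by = "id")   # rows only in the new table
inner_join(old, new, by = "id", suffix = c(".old", ".new")) %>%
  filter(value.old != value.new) # rows in both tables, but changed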

Managing change using code

I have found myself in this position while working on a Workday project. Workday is a system with complex behaviour, but it is a “low code” system. That is, there is nothing to diff! And as we all know, no source control means pain. Workday actually has a very powerful auditing system, which is great for looking at changes once you have found the thing that is broken, but change detection is hard.

So what did I do? I found the way to export all my config as one great big XML file; then I know that I can download it and diff the whole thing. So when many subtle changes interact, I have a backup.

You might not have the same thing in your low code system, but thinking “how can I recreate the benefits of source control?” might save your bacon.
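As a sketch of the idea (not specific to Workday: assume you can export a config file by hand and just want history and diffs):

# Drop each export into a git-tracked folder and commit it,
# so `git diff` can show what changed between exports.
snapshot_config <- function(xml_path, repo = "config-history") {
  if (!dir.exists(repo)) {
    dir.create(repo)
    system2("git", c("-C", repo, "init"))
  }
  file.copy(xml_path, file.path(repo, "config.xml"), overwrite = TRUE)
  system2("git", c("-C", repo, "add", "config.xml"))
  system2("git", c("-C", repo, "commit", "-m",
                   shQuote(paste("config snapshot", Sys.Date()))))
}

# after each export:
# snapshot_config("exported_config.xml")
# then `git -C config-history diff HEAD~1` shows what changed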

Turning documents into data

I had to do a lot of work with counterparties where you can’t share anything other than a web interface or an emailed Excel file.

Probably the worst part was trying to set up integrations with counterparties. We needed different settings for each integration, and there were about 100 of them! However, they provided a web interface with a lot of free text and an unhelpfully positioned “delete” button that would delete one of your 100 almost identical records…

So again, with the source control:

  • save the page HTML
  • write a page parser in R (sketched below)
  • save the data as CSV, then use source control to compare to your master data!
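A minimal sketch of the parser, assuming the saved page holds the settings in an HTML table (the file names are hypothetical):

library(rvest)
library(readr)

page     <- read_html("integration_settings.html")  # the saved page
settings <- html_table(page)[[1]]                   # first <table>, as a tibble
write_csv(settings, "integration_settings.csv")     # diff this under source control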

I also had a similar one where:

  • I had a report of all the bank statements the bank had sent…
  • and needed to find the accounts that weren’t sending.
  • So again, I exported the list of files by cut ‘n’ paste from the web site…
  • and used R to hack apart the filenames to see which accounts and dates were being sent (sketched below)
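A sketch of the filename hacking; the pattern and the master account list are stand-ins for whatever your bank actually sends:

library(stringr)

files <- c("stmt_GB123_2020-03-31.csv", "stmt_GB456_2020-03-31.csv")
master_accounts <- c("GB123", "GB456", "GB789")

parts <- str_match(files, "^stmt_(\\w+)_(\\d{4}-\\d{2}-\\d{2})\\.csv$")
accounts_seen <- parts[, 2]              # column 1 is the whole match

setdiff(master_accounts, accounts_seen)  # the accounts that aren't sending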

People on the other end tend to find it spooky how accurate you are.

Sadly, your standards go up so high that no one can get it right enough… When working with counterparties, “winning” doesn’t mean you always being right and them always being in the wrong.

Obsidian is the app that is changing my life

TL;DR; Obsidian is a personal knowledge management tool; like OneNote, but simpler; like a wiki, but faster; … somehow magic.

DISCLAIMER: I’m not an expert in Personal Knowledge Management. There are a lot of tricks and techniques. There are a lot of people out there who want to sell you stuff that Will Change The Way You Work For Ever and Upgrade Your Thinking and Brain 2.0 etc. What I’m about to tell you may sound like a snake-oil salesman about to ask for your credit card details.

Most of what you read is my opinion, not based on any kind of neurological or psychological or pedagogical knowledge. 

What is Obsidian?

Looks pretty simple on the surface: it’s a markdown editor with some “nudges” that help you create a lot of interlinked pages.

Creating those pages is very fast, creating links is encouraged.

Most interesting to me is the way it steers me towards defining word-concepts that I can improve on later:

You see something like the result in the original small, well-curated wikis like c2 and Martin Fowler’s bliki.

Every page is a concept, concepts are durable and well-connected.

In Obsidian, if you type a word in [[square]] brackets it becomes a link. Soon you start thinking: every time I mention [[concept]], I link it. [[concept]] starts being a thing; a pointer to all the related connections.

Formatting is markdown and very simple; the app is very fast, which encourages engagement.

So what?

So far, so dull. It’s a wiki? A text notepad? A to-do list?

For me, as I’m using it now, it’s for brain dumping and sorting brain dumps to accelerate the process of getting knowledgeable, and not losing that knowledge.

But the interesting stuff is:

  1. it’s very, very engaging
    1. The rapid formation of links and building of connections is very engaging
    2. When I’ve used it for an hour I feel like I’ve been playing a very interesting computer game
    3. .. but I also feel like I’ve made something, which leaves a glow
    4. Did I mention it was fast?
  2. it stimulates connections of ideas
    1. over on the right of the screen you see: backlinks
    2. backlinks are all the places I mentioned [[concept]], with a few lines from the page
    3. so, even if I don’t write a [[concept]] page, I get a whole lot of cross-connections that I can read, like a distributed summary
    4. and if I used the word but didn’t [[link]] it, it shows me all those places too so I can improve my notes
  3. it works like my brain
    1. that’s the personal knowledge management
    2. there are a lot of techniques for it, but it feels like I can quickly navigate to where I was
    3. feels like my brain state is captured, and I can “reload” that state quickly
    4. … but amplified: I can skim through all the times I’ve mentioned [[concept]] and review them: how I’ve moved from mentioning it, to understanding examples, to grokking it
  4. It encourages daily note taking
    1. like any habit, if you feed it on success, it grows
    2. there are a lot of ways to use it, but it was easy to get started with something that worked
  5. graphs:
    1. this is just because it’s cool:
    2. all the [[concept]] pages and their links, drawn with some sort of force-directed layout that you can click around in

What are you talking about? Personal Knowledge Management? What is that?

Reading and writing are cognitive enhancers. They extend your memory by 10x; they extend your ability to find and absorb experiences and information (GRRM: “a reader lives a thousand lives..”).

My belief: the act of reading is not enough to make information stick, you have to use it in order to embed it and connect it to other fragments.

My belief: like revising for any exam, the cheapest way to make information stick is to re-express it in your own words. Yes, writing about chemistry or stone-age languages or Kubernetes isn’t as good as getting your overalls on and actually doing it, but if you actually do that, you’ll have even more to write. There are always more hypotheses than there is time to test them; that’s the problem with the scientific method: it’s not all that methodical! It requires some introspection and some inspiration to get to a set of actions that you can carry out before you run out of time / money.

Personal Knowledge Management is consciously and deliberately extending your mental reach; going beyond “7 plus or minus 2”. Tools are part of it. Habits are the other.

Uh Oh, here comes the Obsidian salesman…

I said that Obsidian is changing my life: it is. Other tools are available (see below).

Basically, my feeling is that there has to be a better way than using a bit of paper. I’ve used Evernote for several years as a notepad that I never lose. I don’t think it’s that good. It is no better than many alternatives. I use it as a web clipper for articles that I want to read later and keep for reference. It works OK at that. You could just use your email for that.

I have also used OneNote for many years as a journal of work things. Keeping a journal has been very useful, not only at review time (smile). The practice of writing a journal helps me reflect on what has happened. And I very often re-read the journals of the past, mining them for information and what I was working on. Again, I could just use a big Word doc or anything.

I’ve also kept a blog for more than 10 years; that also helps. Publishing is a different thing, but related. I couldn’t just take a [[concept]] page and press publish. It needs an MUA and a lot of spray tan.

So I’ve been working in text notes a long time and Obsidian has scratched an itch I didn’t know I had.

Yes, OK, text-only typing is not everything. Musicians reflecting on the history of harmonic structure might need some actual music in there to neurologically fire things up. And yes, a sketch would be nice too.

And yes it does help that I can type pretty fast.

Tools or techniques? Obsidian or just typing more..?

Ahhhh… I feel I’ve only scratched the surface here. Tools are changing the game, but some people – particularly academics and spies – have been doing this since before computers. 

My belief: one route is to:

  • read a lot
  • read useful stuff
  • keep notes
  • level up those notes into knowledge
  • level up those knowledgeable notes into new connections (SF legend Isaac Asimov on connecting ideas)

But Obsidian has some powerful stuff for doing that, some nudges and some features. (Apparently the use of custom CSS is a big deal, not sure why yet (smile))

So yes, Obsidian is changing my life, but I still use the pen too… and the Millennium Falcon notepad…

Competitors to Obsidian

Some names. I didn’t use any of them, because Obsidian keeps plain text files on your hard drive. I’ve put my folder into OneDrive, so it’s online and backed up. I also commit to a local git repo regularly, should I need some Time Machine.

  • Obsidian: all offline in plain text files, free, a little hairy to use in places. No iOS / Android apps yet.
  • Roam Research. This is probably the biggest name. Someone said: “Roam Research looks like the Mac, Obsidian looks like Linux. I choose Mac”. All online, but similar feature set. Reasonable price.
  • Notion: more of a wiki / team site / Confluence / Jira killer, but pretty big on “finding what you need”
  • RemNote: more of a revision tool; so good I’m thinking of putting my kids on it when they get to GCSE. Creates “spaced repetition” flashcards out of the box
  • Dynalist: more of a pure-play “outliner” tool; a notepad without a root notepad: it just nests everything in a bullet list. Also tries to be for tasks? No pages, so it links at the bullet level.
  • Workflowy: another outliner, again trying to be as simple as possible; also trying to be a task list?

So I don’t think any of them would meet corporate needs for task management, being less good than Jira, or be so much better than Confluence as to replace it in the team setting. However, some of them might be very powerful as personal tools for levelling up.

Building strings in R

TL;DR; use the glue package for elegant string building by merging variables into strings using {}

If you are using R, you are probably doing something like this to build strings:

cap <- 0.0
title <- paste0("chart showing all positions greater than ", cap*100 , "%" )

or even something like this, building a SQL query:

query <- str_c("SELECT * FROM dbo.SEC_WEEK sw WHERE (sw.Sec_ID IN ('" ,myBM,"','E:M891800L' ) ","AND sw.Sec_Date BETWEEN '", start_date, "' AND '", end_date , "' ")

Which gets quite hard to read.

You can use the glue package to get some of the string building that we enjoy in C#.

query <- glue("SELECT * FROM dbo.SEC_WEEK sw WHERE (sw.Sec_ID IN ('{myBM}','E:M891800L' ) ",
"AND sw.Sec_Date BETWEEN '{start_date}' AND '{end_date}' ")

and here’s a full example:

library(tidyverse)
library(lubridate)
library(glue)
myBM <- "asb"
start_date <-  ymd("2008-01-01")
end_date <- ymd("2010-01-01")

# long version
query <- str_c("SELECT * FROM dbo.SEC_WEEK sw WHERE (sw.Sec_ID IN ('" ,myBM,"','E:M891800L' ) ","AND sw.Sec_Date BETWEEN '", start_date, "' AND '", end_date , "' ")

# glue version
query <- glue("SELECT * FROM dbo.SEC_WEEK sw WHERE (sw.Sec_ID IN ('{myBM}','E:M891800L' ) ",
"AND sw.Sec_Date BETWEEN '{start_date}' AND '{end_date}' ")

Speed is the ultimate software metric

I read this report and liked this sentiment:

Adopt DevSecOps practices/culture; prioritize speed as the critical metric.

Interesting idea: pulling DevOps up to the top level, as the thing that will enable all the related elements to be optimized. I guess the logic is: if you can deploy a fix in minutes to the live systems, then a lot of other things must be going right.

In expanded form:

Speed is the ultimate software metric. Being able to develop and deploy faster than our adversaries means that we can provide more advanced capabilities and be more responsive to our end users. Faster reduces risk by focusing on the critical functionality rather than over-specification and bloated requirements. It also means we can identify trouble earlier and take faster corrective action which reduces cost, time, and risk. Faster leads to increased reliability: the more quickly software/code is in the hands of users, the more quickly feedback can focus efforts to deploy greater capability, sooner. Faster gives us a tactical advantage on the battlefield because we can operate and respond inside our adversaries’ observe–orient–decide–act (OODA) loops.

The board that produced it advises the US Department of Defense on software, and I like what they have to say on several topics, which is surprising for a government advisory panel. And even more surprising for defense.

The board is composed of a blend of biggish names from academia and some properly big names of tech from the commercial Silicon Valley world. I’m not saying that decades-long procurement exercises in the military world are comparable to what I do in the world of software for business, but when I read it I think YES.

I even like the way they packaged their doc: a TL;DR;, a README, and then more detailed full reports. Also, the style is engaging for such a dry, dry topic.

Plus this beauty entitled Detecting Agile BS.