Categories
Uncategorized

On reading and the reader

An engineer at Google wrote a wonderful Twitter thread a little while ago on becoming a better reader.

Someone had already summoned the ever-useful Threader app to stitch the thread into a single page, which I sent to Instapaper to read leisurely. I’m quoting some tweets here:

Read Non-fiction for mental models & perspectives on how the world has worked, currently works, and how you can create new works within it.
Read Fiction for emotional models on how to feel deeper within yourself and in relation to others.

Lindy Effect is a theory that the longer a non-perishable thing has been in existence, the more likely it is to remain in existence.
Old Ideas & Books that are still relevant today have a lower chance of becoming expired knowledge and are therefore more valuable in the present.

5-star reviews can be gamed & don’t tell you why you shouldn’t read the book. Read passionate, constructive, and specific 3 & 4-star reviews to find out what the book lacks to see if what remains is worthwhile.

Cultivate a Triathlon Reading Pace
Learning to see:
– Flat planes of a book to speed up,
– Wet hills to climb up slowly with care, so as not to fall back into re-reading,
– Rocky patches with varying shapes for a hop and skip & more dynamic reading speed for the changing terrain.

Some books are meant to be sat with and soaked in. Immerse yourself over many days, weeks, months.

I identify with that last tweet. Once a year I’ll go through the ritual of reading my well-worn copy of David Foster Wallace’s Infinite Jest. It’s a river that takes several weeks to ford, during which time I’ll read a few other books in parallel. During those weeks my mind floats in DFW’s alternate universe even as it carries on its business in the real one. As with many great books, each experience is different, each experience uncovers something new, each experience is a different me.

Categories
Data Custody

Dropbox and the Digital Hub

Returning briefly to our earlier discussion about Mozilla’s big opportunity to be the trusted, neutral, privacy-first custodian of data in the 21st century: it seems increasingly like Dropbox sees itself as that custodian. [1] It recently released a set of features that point towards that positioning:

  • a password manager
  • support for encrypted files/folders – though in beta at the time of this writing
  • automated backup of common folders – Desktop, Documents and Downloads, which it is calling computer backup
  • digital signatures to sign files stored in Dropbox – also in beta

Dropbox already:

  • syncs files in the main Dropbox folder – the core service
  • backs up phone photos
  • allows for collaborative document creation and editing like Google Docs, with its Paper product

This checks off a lot of the services we’d discussed for Mozilla to offer, and is for Dropbox a more ambitious positioning than a cloud storage provider, which Steve Jobs had once dismissed as a ‘feature, not a product’.

[1] Without of course Mozilla’s reputation for privacy-consciousness.

Categories
The Dark Forest of the Internet The Next Computer Wellness when Always-On

Mental health tech cannot be like regular tech – Part 4

(Part 3 – My personal experience with seeking help with my mental health)

I’d called it simply Someone To Talk To.

Someone To Talk To – a chatbot

Background:
Several – most? – people with anxiety or mild depression would benefit tremendously from just having someone to talk to. Someone who is receptive and non-judgemental, and doesn’t have any other relationship with the person. It’d be ideal if people had someone like this to talk to in person, but it is impractical: not just because it involves travel for one or both people, but also because it requires scheduling and therefore a set time, and because people may feel the need to talk at any point in the day. The mobile phone, a deeply personal device available 24×7, is ideal.

Now, this need goes far beyond mood-logging apps that ask for a rating or emoticon to describe one’s mood. Not only do people’s emotions vary significantly during the day (especially for those who are anxious or mildly depressed), not only are they more likely than not to rate their overall day as negative, thereby feeling further down at their own constant negative rating of their days, but, more importantly, their needs go beyond such a one-point (or even multi-point) rating. They need an outlet to put their thoughts and anxieties and fears into words.

Another approach to this is journaling apps, including visually beautiful ones like Day One. Journaling apps come in many types, including 5-Minute Journals and Gratitude Journals. But the prospect of filling up a blank screenful, especially day after day, is too often overwhelming for someone who has low emotional bandwidth in the first place.

Here are the characteristics of what I think will make a good Someone To Talk To:

It will be conversational. It’ll feel like a chat (and in fact will be one, with pictures and links sent both ways, just like in a regular chat) instead of a set of screens and buttons. The bot will be able to organize what one says, with context, into a journal of sorts for the person’s reference at any time later. The person will be able to share with the bot not only text, but also pictures and other media. The bot should be able to respond with at least some context – gentle encouragement or reassurance – but must take great care not to overdo it or sound artificial. Not responding is better than responding like, well, a bot. The bot will also learn about the person over time: what the person seems to like and not like, whether that changes over time, and the specific people and places the person refers to and their relationship to them. The person may, by choice, give them appellations that are not their real names or descriptions. The bot will know this.

It will be empathetic and sensitive. It will be designed knowing that a regimental approach of asking for a mood update, or a diary entry at the same time every day, as many apps seem to do, is counter-productive and causes more stress than it relieves. It will be designed knowing that on occasion a notification or picture or video or piece of music can trigger anxiety/sadness/distress in people and they may not even recognise it as such, much less know why. Finally, it will know when the person is looking for a response from it, and when it simply needs to ‘listen’, providing occasional acknowledgement of its ‘presence’.

It will be gently intelligent. While it will often initiate conversation, it won’t ask with every interaction whether it felt right, or whether the person liked it or not. Being the one expected to be ‘in charge’ of the bot-human relationship can feel challenging. If the bot is designed to be a stand-in for a human companion, it must do better than ask for feedback often (wouldn’t it be stressful to have a friend do so?). Just like making decisions, being made to pass judgement also brings pressure. It will never expect anything from the person it engages with, such as a response in a certain amount of time, or at all. It will also be able to gauge improvement or decline in mood over time and adjust accordingly.

In addition to being empathetic, sensitive and intelligent, it will also be realistic. It will never make empty promises about things always getting better.

As things turned out, I returned to my day job and never actually worked on this beyond a few simple test versions with a couple of other collaborators. But this, still, is exactly how I’d go about bridging the gap between dealing with mild mental health issues on one’s own, and full-fledged therapy.

(ends)

Categories
The Dark Forest of the Internet The Next Computer Wellness when Always-On

Mental health tech cannot be like regular tech – Part 3

(Part 2 – What happens when you run a mental health service as a Valley-type startup)

Several years ago, I had first-hand experience with poor mental health and sought help via in-person therapy and medication. During that time, I realized the following:

  • mental health help is binary: either there’s nothing or there are sessions with a mental health professional
  • the financial cost puts it out of the reach of most people
  • the time investment makes it difficult to balance work, home, and therapy

At the same time, I had observed how my health deteriorated over time, beginning with mild depressive symptoms and a decrease in drive and discipline. Today, I know that in others depression first manifested as increased anxiety. At that point, you know something’s not right, but it doesn’t seem anywhere close to needing to see a psychotherapist. So you plod along until things begin to slide faster. By the time you seek help, or someone does on your behalf, you’ve suffered quite a bit.

Therefore, I understood that

  • people needed – still need – something handy for symptoms that don’t warrant full-fledged medical care but that provides some minimal level of support. There are now several studies showing that a plurality of the population, especially urban, has poor mental health
  • because people would likely end up being somewhat dependent on it, it needed to be low-overhead enough to be offered free. 
  • and counterintuitively, it needed to be low-tech enough that its limits would be clear right away so that it would not disappoint later.

(Part 4 – My 2017 manifesto for such a service)

Categories
The Dark Forest of the Internet The Next Computer Wellness when Always-On

Mental health tech cannot be like regular tech – Part 2

(Part 1 – the NYT investigates a mental health tech service for practices that aren’t in patients’ best interests)

Continuing our conversation about the NYT’s investigation into the practices at Talkspace, the remote mental health therapy app.

The problem stems from running a healthcare service like this as a business. Worse, a startup. Having spent the vast majority of my professional life at startups, I am all too familiar with the incentives to scale, to perform according to metrics, to employ ‘growth hacks’. When a healthcare startup employs these, the effects are much worse than with a game or a social media app.

[the company] has questionable marketing practices and regards treatment transcripts as another data resource to be mined. Their accounts suggest that the needs of a venture capital-backed start-up to grow quickly can sometimes be in conflict with the core values of professional therapy, including strict confidentiality and patient welfare.

In 2015 and 2016, according to four former employees, the company sought to improve its ratings: It asked workers to write positive reviews. One employee said that Talkspace’s head of marketing at the time asked him to compile 100 fake reviews in a Google spreadsheet, so that employees could submit them to app stores.

When convenient, the company spins itself as a healthcare provider: “users can’t delete their transcripts, for example, because they are considered medical records.”, but those transcripts are used by the company for customer engagement and retention:

[A therapist on the platform] said that after she provided a client with links to therapy resources outside of Talkspace, a company representative contacted her, saying she should seek to keep her clients inside the app… “I was like, ‘How do you know I did that?’” Ms. Brennan said. “They said it was private, but it wasn’t.”

Finally, the disconnect between offering a private healthcare service and the exigencies of operating a startup is clear in the company’s public statements:

On Nov. 9, 2016, the morning after the election of Donald Trump, Mr. Frank wrote on Twitter: “Long night in NYC. Woke up this morning to record sales.” The Trump election tweets are examples of the sometimes unfiltered social media presence of Mr. Frank and Talkspace — an irreverence familiar among start-ups but unusual among organizations devoted to mental health care.

In 2016, a man named Ross complained on Twitter that the company’s subway ads “were designed to trigger you into needing their services.” Talkspace’s official Twitter account responded, “Ads for food make people hungry, right?” and added, “I get what you’re saying, Ross, but medical professionals need people to buy things.”

The problems of misincentives in private healthcare are well known and well debated, though nowhere close to being resolved. And startups get a lot more attention than a more traditional healthcare provider would. And Talkspace may have begun with noble ambitions. Regardless, what it has built isn’t anywhere close to what I think mental health care tech should look like.

How it approaches conflicts between what is good for itself and what is good for its customers – patients – diminishes trust. When it comes to data privacy, to marketing, to its incentives for therapists, its responses indicate that it will choose its financial health over people’s mental health. Unfortunately, it’s likely that similar venture-backed tech companies in healthcare face similar conflicts and pressures.

(Part 3 – How I’d approached a simple mental-health chat service)

Categories
The Dark Forest of the Internet The Next Computer Wellness when Always-On

Mental health tech cannot be like regular tech – Part 1

Earlier in August, the NYT published a longform investigation into the data and marketing practices of the mental health therapy provider Talkspace. It hit home for me.

So many of the NYT’s discoveries dismayed, even horrified me. I sought mental health therapy myself several years ago, and about three years ago contemplated working on a different approach: a text-based service for people with anxiety and mild depression. We will talk about the specifics some time later. However, I thought in detail about, and spoke to mental health professionals about, the technology, the ethics, the right audience, and other such issues. The way I thought about all of these was so very different from Talkspace’s.

The first is the way the service is marketed to potential customers and to therapists:

Talkspace is advertised to users as unlimited, “24/7” messaging therapy. “Your therapist will see your messages and respond to you throughout the day,” the company says. Therapists get a different pitch: “Set your business hours, and check in on your clients daily, five days per week.”’

This immediately reminded me of the worst of Groupon, which pitched to users that they’d never have to pay full price for anything again, while pitching to businesses that a single one-time mass discount would convert to loyal users willing to pay, well, full price.

But Talkspace went beyond, adding this:

Talkspace introduced a new feature: a button that users could press after sending a message that required the therapist to respond within a certain time frame. If the therapists don’t respond in time, their pay can be docked.

Some therapists on the platform were alarmed, in part because the function required them to work on demand, rather than on their own schedule. More significantly, they asked: Is it harmful to give clients with anxiety and boundary issues a button to press for immediate gratification?

First, this reminded me of the worst of Uber. I have been told by more than one driver that they could only turn down a limited number of rider assignments a day, and had to be online at least a certain number of hours a day. After the rollout of the feature where drivers are assigned their next ride towards the end of the ongoing one, those drivers said they couldn’t even take time out to visit the restroom, let alone take lunch and snack breaks, because they also had to reach the next pickup in the time the Uber app had estimated for the waiting rider.

Troubling as this is, these constraints are worse in the context of mental health because they introduce added anxiety for both the therapist and the patient, the former needing to be equanimous and the latter already suffering from some anxiety/depression.

(Part 2 – more about conflicts)

Categories
The Next Computer Wellness when Always-On

Blue light and depression

There’s evidence that warm lighting aids sleep in humans. Now there’s evidence of a connection between the other kind of lighting, ‘blue light’, and depression. Blue light is the kind of light our computer and mobile screens commonly emit.

The researchers exposed lab mice to blue light for two hours every night for three weeks and then watched the mice closely to determine if the light exposure was having any discernable impact on their behavior. The researchers found that initially, the mice showed no signs of change at all. But after three weeks, the mice began to work less hard for a sugar reward and tried less hard to escape when presented with the possibility—both considered to be signs of depression in mice. The researchers also found that the depressive symptoms continued for up to three weeks, even after discontinuation of night light exposure.

And there’s some evidence of the cause:

They found a specific kind of light receptor in the mouse retina that led to brain areas associated with mood: the nucleus accumbens and the dorsal perihabenular nucleus. When the researchers disconnected them, the mice did not become depressive due to exposure to blue light at night. The researchers also found that the pathways became much more active when processing blue light at night versus daytime—which, they suggest, explains why blue light during the day does not lead to depression.

Use warm lighting wherever you can: ideally everywhere at home, but especially in the rooms where you spend your evenings. Second, turn on Night Shift on your iPhones, iPads and Macs. In fact, on Macs, instead of Night Shift, use the excellent Flux, which progressively changes the tint of your screen as you get closer to bedtime. The Amazon Kindle Paperwhite, at least the first-gen, also emits blue light – so around bedtime it may be best to turn off the built-in light and read in ambient warm lighting. And have a wind-down routine.

Categories
Data Custody Decentralisation and Neutrality Privacy and Anonymity The Next Computer Wellness when Always-On

The New Middle – Part 2

(Part 1)

Take privacy. There is a near-critical mass of people aware and concerned about how much data faceless companies have about them without their informed consent and how disparate data sets may be combined to manipulate their choices. But many of them aren’t aware of browser-based ad-blockers, their capabilities, how to install and update them. Network-wide ad-blockers such as pi-hole are simply beyond them.

This will change. Privacy-consciousness is going to be an attractive positioning for a business to take in the near future, because the New Middle is searching for them. The email service Protonmail, the browser Brave, the password manager 1Password, the two-factor authentication service Authy (now part of Twilio), the self-hosted personal information manager Nextcloud – these are all the vanguard for well-designed privacy-centric software but I don’t think they have yet marketed to the New Middle, which still reads, watches and listens to the same things the Mainstream does. Apple is the lone company I see having already staked out an unshakeable positioning here. We have recently discussed Mozilla’s opportunity to own this market as well. There is an opportunity for hardware too – imagine a router with firmware that bundles pi-hole, that does not need OpenWRT to be user-installed (side note: Apple’s hardware is already privacy-conscious).

All this is still just one area – privacy. Think about similar New Middle companies in the other areas we listed above. Pervasive anonymity as a service, anonymous-only social network and communities. Attention preservation for you and your kids. ISPs and communications providers publicly committed to net neutrality. Since there are overlaps between many of these, companies can and will compete and win the New Middle in more than one of these areas. Lastly, for large-scale reach, they will need to raise capital. Many of these will do so via the public market. They’ll make for attractive investment opportunities.

(ends)

Categories
Uncategorized

My advice before switching to something new is to first get clarity about what is broken with your current system and tools. Decide how things need to be improved and what a better system would look like for you. Then find a tool that will support your new workflow (instead of the other way around).

– Shawn Blanc at The Sweet Setup

This applies to most changes in life and in business.

Incorporating sunk costs into decision-making is a fallacy, but it’s true that switching costs are nonzero, learning curves are real, and the opportunity cost of time exists.

Categories
Data Custody Decentralisation and Neutrality Privacy and Anonymity Wellness when Always-On

The New Middle – Part 1

The New Middle is an emerging class of people who are beginning to care about the issues in technology we frequently discuss here on the site.

Until recently, when it came to privacy, anonymity, attention fatigue, data custody, net neutrality, intellectual property and patents, even ergonomics and sustainable computing, people either used to be part of The Mainstream or used to be nerds that worked in the technology industry. There were a few outliers for sure, but no overlap.

The Mainstream neither cared about nor understood the implications of such issues for themselves, their community, their society. This indifference would shock, even offend, the nerds. The nerds, for their part, understood at least some of these issues as a result of their education. Some of them built software, hardware, and services involving these issues as part of their day jobs. Consequently, they cared passionately enough to make deliberate choices that would seem very odd to the Mainstream.

As technology in all its forms, across all domains, has become part of everyday life, over the past twenty years it has received extensive coverage in the national press – in print, on TV and, more recently with Netflix and similar services, online. The coverage has shifted from being effusive about the transformative possibilities of new technology – “your life will never be the same after you buy a personal computer!” – to a more sober take on its ill-effects on our wellness and safety.

That has created vastly more widespread awareness among the Mainstream. It has led a sizeable minority of them to think and care enough about how technology affects them, their kids, their friends that they’ve begun looking for ways to gain some control. They are no longer the Mainstream. They are the New Middle.

The New Middle is searching for tools and means to exercise this control at their level of technical competence, but they are underserved because they haven’t existed as an identifiable segment before.

(Part 2 – an example)