Categories
Wellness when Always-On

Asana

In the early days of the task- and project-management software Asana, I was interested in a number of public statements by co-founder Dustin Moskovitz (also a Facebook co-founder) about calmness and deliberation at work, and how they influenced the company’s product decisions. That resonated with me enough to try Asana out in the business unit I ran in 2011–12.

Since then the company has mostly flown under the radar, but now it’s getting some press as it files for its IPO. One of those articles once again covered the company culture:

Internally, Moskovitz and Rosenstein took their time crafting their idealized corporate culture. They interviewed experts, brought in executive coaches and tapped a diversity-and-inclusion officer and a “head of people” over the years to get it just right. Moskovitz, who spent years studying Buddhism and leadership strategies, set up a company org chart with himself at the bottom, to represent the trunk of the company tree. Asana eschews traditional executive titles and instead makes people heads of a particular topic or business outcome. Moskovitz says it’s only right that a company focused on better teamwork invest in it as well: “We want to practice what we preach, figure out what’s best and export that.”

Along the way, Moskovitz leveled up as a leader, too. Known for his temper as a twenty-something at Facebook, he says he has learned not to agonize so much over setbacks. He invokes a saying from mindfulness expert Jon Kabat-Zinn: “You can’t stop the waves from coming, but you can learn to surf.”

Categories
Wellness when Always-On

Burnout and distraction

From the venture capitalist Fred Wilson, writing about something I have experienced first-hand myself:

I’ve gone through a few periods of burnout in my career and it was usually brought on by a string of painful failures. I’ve watched others navigate that similarly. If you are heavily invested in something that doesn’t work, it hurts. And the more the investment is of yourself, your time, your enthusiasm, the worse it is.

I’ve gotten out of these periods of burnout by turning my attention to something else. Crypto was helpful for me back in 2013 and 2014 when I was going through one of those periods. It was something new that I could throw myself at, that was different, and that was working.

The first time I experienced burnout, I spent the time laying the groundwork for an India-specific Demand Side Platform for ads. That was before my current thinking on privacy and anonymity evolved to what it is, and in retrospect I’m glad I never built upon that work. The second time, when burnout was combined with poor mental health, I spent the time on the Someone To Talk To chat application that we recently discussed.

Categories
The Dark Forest of the Internet, The Next Computer, Wellness when Always-On

Mental health tech cannot be like regular tech – Part 4

(Part 3 – My personal experience with seeking help with my mental health)

I’d called it simply Someone To Talk To.

Someone To Talk To – a chatbot

Background:
Several – most? – people with anxiety or mild depression would benefit tremendously from just having someone to talk to. Someone who is receptive and non-judgemental, and doesn’t have any other relationship with the person. It’d be ideal if people had someone like this to talk to in person, but that is impractical: not just because it involves travel for one or both people, but also because it requires scheduling and therefore a set time, when people may feel the need to talk at any point in the day. The mobile phone, a deeply personal device available 24×7, is ideal.

Now, this need goes far beyond mood-logging apps that ask for a rating or emoticon to describe one’s mood. Not only do people’s emotions vary significantly during the day (especially for those who are anxious or mildly depressed), and not only are they more likely than not to rate their overall day as negative, thereby feeling further down at their own constant negative ratings of their days, but more importantly, their needs go beyond such a one-point (or even multi-point) rating. They need an outlet for thoughts and anxieties and fears, to put into words.

Another approach is journaling apps, including visually beautiful ones like Day One. Journaling apps come in many forms, including 5 Minute Journals and Gratitude Journals. But the prospect of filling up a blank screenful, especially day after day, is too often overwhelming for someone who has low emotional bandwidth in the first place.

Here are the characteristics of what I think will make a good Someone To Talk To:

It will be conversational. It’ll feel like a chat (and in fact will be one, with pictures and links sent both ways just like in a regular chat) instead of a set of screens and buttons. The bot will be able to organize what one says, with context, into a journal of sorts for the person’s reference at any time later. The person will be able to share with the bot not only text but also pictures and other media. The bot should be able to respond with at least some context – gentle encouragement or reassurance – but must take great care not to overdo it or sound artificial. Not responding is better than responding like, well, a bot. The bot will also learn about the person over time: what the person seems to like and dislike, and whether that changes over time, and the specific people and places the person refers to and their relationship to the person. The person may give them appellations that are not their real names or descriptions, and that is by choice. The bot will know this.

It will be empathetic and sensitive. It will be designed knowing that a regimental approach of asking for a mood update, or a diary entry at the same time every day, as many apps seem to do, is counter-productive and causes more stress than it relieves. It will be designed knowing that on occasion a notification or picture or video or piece of music can trigger anxiety/sadness/distress in people and they may not even recognise it as such, much less know why. Finally, it will know when the person is looking for a response from it, and when it simply needs to ‘listen’, providing occasional acknowledgement of its ‘presence’.

It will be gently intelligent. While it will often initiate conversation, it won’t ask with every interaction whether it felt right, or whether the person liked it or not. Being the one expected to be ‘in charge’ of the bot–human relationship can feel challenging. If the bot is designed to be a stand-in for a human companion, it must do better than ask for feedback often (wouldn’t it be stressful to have a friend do so?). Just as making decisions brings pressure, so does being made to pass judgement. It will never expect anything from the person it engages with, such as a response in a certain amount of time, or at all. It will also be able to gauge improvement or decline in mood over time and adjust accordingly.

In addition to being empathetic, sensitive and intelligent, it will also be realistic. It will never make empty promises about things always getting better.
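The characteristics above amount to a response policy: respond when directly asked, occasionally acknowledge presence, and otherwise just ‘listen’. Here is a minimal Python sketch of what such a policy might look like; the names, thresholds and the question-mark heuristic are all illustrative assumptions of mine, not taken from any actual prototype:

```python
import random
from dataclasses import dataclass, field
from enum import Enum, auto


class Action(Enum):
    STAY_SILENT = auto()   # often the right choice: just 'listen'
    ACKNOWLEDGE = auto()   # a brief signal of presence, e.g. "I'm here"
    RESPOND = auto()       # gentle encouragement or reassurance


@dataclass
class Conversation:
    recent_bot_replies: int = 0           # bot messages in the current exchange
    messages: list = field(default_factory=list)


def choose_action(message: str, convo: Conversation) -> Action:
    """Decide whether to reply, briefly acknowledge, or stay silent.

    The bias is deliberately toward silence: not responding is
    better than responding like a bot.
    """
    convo.messages.append(message)

    # Crude stand-in for 'the person is looking for a response':
    # a message that ends in a question mark.
    if message.rstrip().endswith("?"):
        convo.recent_bot_replies += 1
        return Action.RESPOND

    # Avoid chattiness: after a couple of replies, fall silent.
    if convo.recent_bot_replies >= 2:
        return Action.STAY_SILENT

    # Otherwise, occasionally acknowledge presence; mostly listen.
    if random.random() < 0.3:
        convo.recent_bot_replies += 1
        return Action.ACKNOWLEDGE
    return Action.STAY_SILENT
```

In a real system the question-mark check would be replaced by actual intent detection, but the shape of the policy – and its bias toward silence – is the point.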

As things turned out, I returned to my day job and never actually worked on this beyond a few simple test versions with a couple of other collaborators. But this, still, is exactly how I’d go about bridging the gap between dealing with mild mental health issues on one’s own, and full-fledged therapy.

(ends)

Categories
The Dark Forest of the Internet, The Next Computer, Wellness when Always-On

Mental health tech cannot be like regular tech – Part 3

(Part 2 – What happens when you run a mental health service as a Valley-type startup)

Several years ago, I had first-hand experience with poor mental health and sought help via in-person therapy and medication. During that time, I realized the following:

  • mental health help is binary: either there’s nothing or there are sessions with a mental health professional
  • the financial cost puts it out of the reach of most people
  • the time investment makes it difficult to balance work, home and this

At the same time, I observed how my health had deteriorated over time, beginning with mild depressive symptoms and a decrease in drive and discipline. Today, I know that in others depression first manifested as increased anxiety. At that point, you know something’s not right, but it doesn’t seem anywhere close to warranting a visit to a psychotherapist. So you plod along until things begin to slide faster. By the time you seek help, or someone does on your behalf, you’ve suffered quite a bit.

Therefore, I understood that

  • people needed – still need – something handy that didn’t warrant full-fledged medical care but provided some minimal level of support for those with mild symptoms. There are now several studies showing that a plurality of the population, especially in urban areas, has poor mental health
  • because people would likely end up being somewhat dependent on it, it needed to be low-overhead enough to be offered free
  • and, counterintuitively, it needed to be low-tech enough that its limits would be clear right away, so that it would not disappoint later.

(Part 4 – My 2017 manifesto for such a service)

Categories
The Dark Forest of the Internet, The Next Computer, Wellness when Always-On

Mental health tech cannot be like regular tech – Part 2

(Part 1 – the NYT investigates a mental health tech service for practices that aren’t in patients’ best interests)

Continuing our conversation about the NYT’s investigation into the practices at Talkspace, the remote mental health therapy app.

The problem stems from running a healthcare service like this as a business. Worse, a startup. Having spent the vast majority of my professional life at startups, I am all too familiar with the incentives to scale, to perform according to metrics, to employ ‘growth hacks’. When a healthcare startup employs these, the effects are much worse than a game or a social media app.

[the company] has questionable marketing practices and regards treatment transcripts as another data resource to be mined. Their accounts suggest that the needs of a venture capital-backed start-up to grow quickly can sometimes be in conflict with the core values of professional therapy, including strict confidentiality and patient welfare.

In 2015 and 2016, according to four former employees, the company sought to improve its ratings: It asked workers to write positive reviews. One employee said that Talkspace’s head of marketing at the time asked him to compile 100 fake reviews in a Google spreadsheet, so that employees could submit them to app stores.

When convenient, the company spins itself as a healthcare provider – “users can’t delete their transcripts, for example, because they are considered medical records” – but those transcripts are used by the company for customer engagement and retention:

[A therapist on the platform] said that after she provided a client with links to therapy resources outside of Talkspace, a company representative contacted her, saying she should seek to keep her clients inside the app… “I was like, ‘How do you know I did that?’” Ms. Brennan said. “They said it was private, but it wasn’t.”

Finally, the disconnect between offering a private healthcare service and the exigencies of operating a startup is clear in the company’s public statements:

On Nov. 9, 2016, the morning after the election of Donald Trump, Mr. Frank wrote on Twitter: “Long night in NYC. Woke up this morning to record sales.” The Trump election tweets are examples of the sometimes unfiltered social media presence of Mr. Frank and Talkspace — an irreverence familiar among start-ups but unusual among organizations devoted to mental health care.

In 2016, a man named Ross complained on Twitter that the company’s subway ads “were designed to trigger you into needing their services.” Talkspace’s official Twitter account responded, “Ads for food make people hungry, right?” and added, “I get what you’re saying, Ross, but medical professionals need people to buy things.”

The problems of misaligned incentives in private healthcare are well known and well debated, though nowhere close to being resolved. And startups get a lot more attention than a more traditional healthcare provider would. And Talkspace may have begun with noble ambitions. Regardless, what it has built isn’t anywhere close to what I think mental health care tech should look like.

How it approaches conflicts between what is good for itself and what is good for its customers – patients – diminishes trust. When it comes to data privacy, to marketing, to its incentives for therapists, its responses indicate that it will choose its financial health over people’s mental health. Unfortunately, it’s likely that similarly funded, venture-backed tech companies in healthcare have similar conflicts and pressures.

(Part 3 – How I’d approached a simple mental-health chat service)

Categories
The Dark Forest of the Internet, The Next Computer, Wellness when Always-On

Mental health tech cannot be like regular tech – Part 1

Earlier in August, the NYT published a longform investigation into the data and marketing practices of the mental health therapy provider Talkspace. It hit home for me.

So many of the NYT’s discoveries dismayed, even horrified me. I sought mental health therapy myself several years ago, and about three years ago contemplated working on a different approach: a text-based service for people with anxiety and mild depression. We will talk about the specifics some time later. But I thought in detail about, and spoke to mental health professionals about, the technology, the ethics and the right audience, among other such issues. The way I thought about all of these was very different from Talkspace.

The first is the way the service is marketed to potential customers and to therapists:

Talkspace is advertised to users as unlimited, “24/7” messaging therapy. “Your therapist will see your messages and respond to you throughout the day,” the company says. Therapists get a different pitch: “Set your business hours, and check in on your clients daily, five days per week.”

This immediately reminded me of the worst of Groupon, which pitched to users that they’d never have to pay full price for anything again, while pitching to businesses that a single one-time mass discount would convert into loyal users willing to pay, well, full price.

But Talkspace went beyond, adding this:

Talkspace introduced a new feature: a button that users could press after sending a message that required the therapist to respond within a certain time frame. If the therapists don’t respond in time, their pay can be docked.

Some therapists on the platform were alarmed, in part because the function required them to work on demand, rather than on their own schedule. More significantly, they asked: Is it harmful to give clients with anxiety and boundary issues a button to press for immediate gratification?

First, this reminded me of the worst of Uber. I have been told by more than one driver that they could only turn down a limited number of rider assignments a day, and had to be online at least a certain number of hours a day. After the rollout of the feature where drivers are assigned their next ride towards the end of the ongoing one, those drivers said they couldn’t even take time out to visit the restroom, let alone take lunch and snack breaks, because they also had to reach the next pickup in the time that the Uber app had estimated for the waiting rider.

Troubling as this is, these constraints are worse in the context of mental health because they introduce added anxiety for both the therapist and the patient, the former needing to be equanimous and the latter already suffering from some anxiety/depression.

(Part 2 – more about conflicts)

Categories
The Next Computer, Wellness when Always-On

Blue light and depression

There’s evidence that warm lighting aids sleep in humans. Now there’s a connection between the other kind of lighting – ‘blue light’, the kind our computer and mobile screens commonly emit – and depression.

The researchers exposed lab mice to blue light for two hours every night for three weeks and then watched the mice closely to determine if the light exposure was having any discernable impact on their behavior. The researchers found that initially, the mice showed no signs of change at all. But after three weeks, the mice began to work less hard for a sugar reward and tried less hard to escape when presented with the possibility—both considered to be signs of depression in mice. The researchers also found that the depressive symptoms continued for up to three weeks, even after discontinuation of night light exposure.

And there’s some evidence of the cause:

They found a specific kind of light receptor in the mouse retina that led to brain areas associated with mood: the nucleus accumbens and the dorsal perihabenular nucleus. When the researchers disconnected them, the mice did not become depressive due to exposure to blue light at night. The researchers also found that the pathways became much more active when processing blue light at night versus daytime—which, they suggest, explains why blue light during the day does not lead to depression.

Use warm lighting wherever you can – ideally everywhere at home, but especially in the rooms where you spend your evenings. Second, turn on Night Shift on your iPhones, iPads and Macs. In fact, on Macs, instead of Night Shift, use the excellent Flux, which progressively changes the tint of your screen as you get closer to bedtime. The Amazon Kindle Paperwhite, at least the first-gen, also emits blue light – so around bedtime it may be best to turn off the built-in light and read in ambient warm lighting. And have a wind-down routine.

Categories
Data Custody, Decentralisation and Neutrality, Privacy and Anonymity, The Next Computer, Wellness when Always-On

The New Middle – Part 2

(Part 1)

Take privacy. There is a near-critical mass of people aware and concerned about how much data faceless companies have about them without their informed consent and how disparate data sets may be combined to manipulate their choices. But many of them aren’t aware of browser-based ad-blockers, their capabilities, how to install and update them. Network-wide ad-blockers such as pi-hole are simply beyond them.

This will change. Privacy-consciousness is going to be an attractive positioning for a business to take in the near future, because the New Middle is searching for it. The email service Protonmail, the browser Brave, the password manager 1Password, the two-factor authentication service Authy (now part of Twilio), the self-hosted personal information manager Nextcloud – these are all the vanguard of well-designed privacy-centric software, but I don’t think they have yet marketed to the New Middle, which still reads, watches and listens to the same things the Mainstream does. Apple is the lone company I see having already staked out an unshakeable positioning here. We have recently discussed Mozilla’s opportunity to own this market as well. There is an opportunity for hardware too – imagine a router whose firmware bundles pi-hole, with no need for a user-installed OpenWRT (side note: Apple’s hardware is already privacy-conscious).

All this is still just one area – privacy. Think about similar New Middle companies in the other areas we listed above. Pervasive anonymity as a service, anonymous-only social network and communities. Attention preservation for you and your kids. ISPs and communications providers publicly committed to net neutrality. Since there are overlaps between many of these, companies can and will compete and win the New Middle in more than one of these areas. Lastly, for large-scale reach, they will need to raise capital. Many of these will do so via the public market. They’ll make for attractive investment opportunities.

(ends)

Categories
Data Custody, Decentralisation and Neutrality, Privacy and Anonymity, Wellness when Always-On

The New Middle – Part 1

The New Middle is an emerging class of people who are beginning to care about the issues in technology we frequently discuss here on the site.

Until recently, when it came to privacy, anonymity, attention fatigue, data custody, net neutrality, intellectual property and patents, even ergonomics and sustainable computing, people either used to be part of The Mainstream or used to be nerds that worked in the technology industry. There were a few outliers for sure, but no overlap.

The Mainstream neither cared about nor understood the implications of such issues for themselves, their community, their society. This indifference would shock, even offend, the nerds. The nerds, for their part, understood at least some of these issues as a result of their education. Some of them built software, hardware and services involving these issues as part of their day jobs. Consequently, they cared passionately enough to make deliberate choices that would seem very odd to the Mainstream.

As technology in all its forms, across all domains, has become part of everyday life over the past twenty years, it has gotten extensive coverage in the national press – in print, on TV and, more recently with Netflix and similar services, online. The coverage has shifted from being effusive about the transformative possibilities of new technology – “your life will never be the same after you buy a personal computer!” – to a more sober take on its ill-effects on our wellness and safety.

That has created vastly more widespread awareness among the Mainstream. It has moved a sizeable minority of them to think and care enough about how technology affects them, their kids and their friends that they’ve begun looking for ways to gain some control. They are no longer the Mainstream. They are the New Middle.

The New Middle is searching for tools and means to exercise this control at their level of technical competence but are underserved because they haven’t existed as an identifiable segment before.

(Part 2 – an example)

Categories
Wellness when Always-On

What is the opposite of not sick?

We have talked about burnout before (“Work and Success“, “Burnout as a Workplace Injury“), something I experienced a few years ago, and which I wouldn’t wish on anyone. This article describes the physiological response to the conditions that typically lead to burnout:

.. the body is handling all kinds of stressors through this allostatic process, the sum of which is also called allostatic load. All things going on in your life right now contribute to this load, including the stress from your work, finances, relationships, sickness, working out, and so on. The most important way that your body responds to these stresses is through the so-called HPA axis (hypothalamic–pituitary–adrenal axis) which results in the secretion of adrenaline and cortisol, the major stress hormones.

These hormones give an immediate boost to the system, preparing it for (intense) action and fight-flight behavior. In other words, when the HPA-axis kicks in you’re running on survival energy. This is great when you’re in acute danger, like escaping from a threat or fighting off attackers. But it is unnecessary during normal daily living. And it can even become damaging when you’re functioning from your survival energy supply 24/7.

The rest of the article describes the progressive effect on the body of continuing to function through such conditions.

The most important piece I think is at the end. The writer poses a simple question:

I want to also point to the necessity of defining the opposite of burnout. Sadly, in many medical and scientific literature pathology prevails. We know very well what sickness and disease look like, but what about the opposite? What’s it like to not be burning out?

This opens a whole rabbit-hole, but I think it is essential to think about this for oneself. Both our formal education and our societal norms condition us to think in terms of achievements. At no point is one encouraged to consider what a desired end state looks like.