>without this funding
The original post by the two authors talked about getting effective monitoring/surveillance.
The status quo is what the planet has with the current funding, etc. If you want something better, then as you inferred, it's going to take additional changes and resources.
In the Less Wrong sequences, there are essays about utilons and warm fuzzies. In the healthcare world, that distinction is always present.
If I spend $80,000 hiring someone to slog through medical records looking for a pandemic, then I have given up the chance to spend $80,000 on a nurse who gets patients out of ambulances and into hospital beds faster. The former cannot be billed to Medicare; the latter can.
To use an AI analogy, if a programmer writes a reward function that rewards the latter and not the former, the programmer doesn't get to act surprised while dying of COVID-19 or being turned into a paperclip.
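To make that reward-function analogy concrete, here is a minimal sketch in Python. The dollar figures and activity names are hypothetical illustrations, not real billing data; the point is only that an objective containing nothing but net revenue will never select the surveillance hire.

```python
# Toy model of the incentive problem: revenue is the only term in the
# "reward function", and pandemic surveillance contributes zero to it.
# All figures are hypothetical illustrations, not real billing data.

ANNUAL_COST = 80_000  # cost of either hire, per the example above

def revenue(activity: str) -> int:
    """Annual revenue an activity generates for the organization."""
    billable = {
        "nurse_speeding_up_ambulance_offload": 120_000,   # billable to Medicare (made-up number)
        "analyst_scanning_records_for_pandemics": 0,      # not billable to anyone
    }
    return billable.get(activity, 0)

def reward(activity: str) -> int:
    """What the organization actually optimizes: net dollars, nothing else."""
    return revenue(activity) - ANNUAL_COST

options = ["nurse_speeding_up_ambulance_offload",
           "analyst_scanning_records_for_pandemics"]

# The argmax of this reward function never picks surveillance,
# even though surveillance is what catches the pandemic.
best = max(options, key=reward)
print(best)                 # -> nurse_speeding_up_ambulance_offload
print(reward(best))         # -> 40000
print(reward(options[1]))   # -> -80000
```

Nothing in the reward term even mentions pandemics, so no amount of optimization pressure on that objective will produce the record-slogger.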
You're correct except that we receive money from other types of organizations too, including non-profit organizations that give money in the form of grants (hi there, American Heart Association!). You'll see why later in this comment.
The firm has international ambitions, but it is an American company with an office in California.
>buy data at a loss?
Not quite that cheap. You can think of it as 'Insert coins. Get a table of data about people in trouble.' More specifically, we charge people for each data source they want us to look at.
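As a rough illustration of the "charge per data source" model, the bill is just a sum over whichever feeds a customer asks us to watch. The source names and prices below are made up for the sketch, not an actual price list:

```python
# Hypothetical per-data-source pricing; the real terms live in the contract
# documents referenced below, not in this sketch.
PRICE_PER_SOURCE = {
    "911_cad_feed": 1_000,              # computer-aided dispatch data (made-up price)
    "ambulance_patient_records": 1_500,
    "fire_department_records": 750,
}

def monthly_bill(sources_requested):
    """Sum the monthly charge for each data source a customer wants watched."""
    return sum(PRICE_PER_SOURCE[s] for s in sources_requested)

print(monthly_bill(["911_cad_feed", "ambulance_patient_records"]))  # -> 2500
```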
The most common type of permission ("HIPAA Business Associate Agreement") doesn't let us share information with other people. I'm fairly certain FirstWatch requires additional legalese before showing one organization's bits to another organization.
For more details about payments and how things work, you can look at pages 6 and 7 of "FirstWatch Agreement 2011" in a customer's agenda item.
Since the "revenue maximization" part caused some trouble, I'll explain further. (If you live in the USA, you already know how this works and can stop here.)
Imagine the following chain of events:
There's going to be a wait time, measured in minutes or hours, before each of steps 4 through 8.
Even if an organization wanted to watch for disease outbreaks, give people the right medical treatment, and let its employees have a good life, it still has to watch out for the money.
Each organization will send a bill to you, your health insurance company, the people who collect your taxes, or some combination of the above, and the bill must be backed up by documentation.
Example:
Step 4:
Step 5:
Step 6: Some firefighters employed by "Fire Alarm Boxes, Inc." or "Fire Alarm Boxes in South-Mega Region Joint Powers Authority"
Step 7: Ambulance crew employed by either "Almost Bankrupt Ambulance Company, Inc.", "Gigantic Group of Ambulances, Inc.", or "City of Little Town". City of Little Town is next to City of Big Name Here.
Step 8:
If you're wondering when the Public Health Department gets involved, the answer is "never" unless you write a law that says "thou shalt report cases of X or else."
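To put the chain and its incentives in one place, here is a minimal sketch. The organizations beyond the two named above, the wait times, and the dollar amounts are all hypothetical; the point is that every hand-off produces a bill backed by documentation, while nothing in the chain notifies the Public Health Department unless a reporting law forces it:

```python
# Hypothetical model of the hand-off chain. Every step has a wait time and
# a bill backed by documentation; none of them reports to public health
# unless a "thou shalt report cases of X" law exists for the condition.

from dataclasses import dataclass

@dataclass
class Step:
    number: int
    organization: str
    wait_minutes: int         # wait before this step happens (made-up)
    bill_dollars: int         # billed to patient/insurer/taxpayer (made-up)
    must_report_by_law: bool  # only True if a reporting law is on the books

chain = [
    Step(6, "Fire Alarm Boxes, Inc. (firefighters)",    wait_minutes=8,  bill_dollars=400,  must_report_by_law=False),
    Step(7, "Almost Bankrupt Ambulance Company, Inc.",   wait_minutes=20, bill_dollars=1200, must_report_by_law=False),
    Step(8, "Receiving facility (hypothetical)",         wait_minutes=90, bill_dollars=3000, must_report_by_law=False),
]

total_wait = sum(s.wait_minutes for s in chain)
total_billed = sum(s.bill_dollars for s in chain)
public_health_notifications = [s for s in chain if s.must_report_by_law]

print(f"Total wait: {total_wait} minutes, total billed: ${total_billed}")
print(f"Steps that notify the Public Health Department: {len(public_health_notifications)}")  # -> 0
```

The Public Health Department simply doesn't appear as a node in the chain; it only gets a message when the law adds one.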
I'm quite fascinated by this post because I work for a company that spent a chunk of its startup years trying to implement the "Early Detection Center" part using 911 calls and call-related data.
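For readers unfamiliar with the term, syndromic surveillance on 911 data boils down to counting calls that match a symptom pattern and flagging unusual spikes. The sketch below is a generic moving-average example with made-up numbers, not FirstWatch's actual algorithm:

```python
# Generic syndromic-surveillance sketch: flag days where the count of
# 911 calls matching a symptom pattern (e.g. flu-like complaints) is far
# above the recent baseline. Illustrative only, with made-up data.

from statistics import mean, stdev

def flag_anomalies(daily_counts, baseline_days=14, threshold_sigma=3.0):
    """Return indexes of days whose count exceeds baseline mean + k * stddev."""
    flagged = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if daily_counts[day] > mu + threshold_sigma * max(sigma, 1.0):
            flagged.append(day)
    return flagged

# Two weeks of ordinary call volume, then a suspicious spike (made-up data).
calls_matching_flu_pattern = [12, 9, 11, 10, 13, 12, 8, 11, 10, 12, 9, 11, 13, 10, 38]
print(flag_anomalies(calls_matching_flu_pattern))  # -> [14]
```

The hard part was never the arithmetic; it was getting anyone to pay for the people and data feeds behind it, which is what the rest of this comment is about.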
From listening to the early folks, I got the impression that "terrorism! biosurveillance!" made for nice press conferences. But people are mostly interested only if we help improve their highly visible and much more obvious key performance indicators (e.g. increasing revenue). Even after getting certified (?) by the US Department of Homeland Security as a syndromic surveillance system, we continue to spend almost all our time on everything but syndromic surveillance.
As a publicly visible example of the company's change, you can go to https://firstwatch.net/what-we-do/ and see that this part of our business is only 1 tile ("Public Health") out of 5.
Maybe someone can learn from our experiences and find a way to persuade people to take syndromic surveillance seriously in the future.
>what do you want?
Mostly I wanted to describe what has been tried before so that maybe someone else can try something smarter in the future. There are so many misaligned incentives and problems that it's hard to know where to start, and it's nice to have a place to put these thoughts down in a productive manner.
I guess I was looking for emotional support and got it; thank you. I imagine that my emotions have similarities to those of people who work on friendly AI.
There's not much else to say that isn't said better elsewhere (woes of American healthcare, coordination problems, the LW sequences, the way brains think of other people, utilons, heroic responsibility, ambiguous delegation of duties, dissemination and usage of knowledge, etc.).
In closing, a passage from chapter 109 of HPMOR feels appropriate.