TL;DR "habitually deliberately visualizing yourself succeeding at goal/subgoal X" is extremely valuable, but also very tarnished. It's probably worth trying out, playing around with, and seeing if you can cut out the bullshit and boot it up properly.

Longer:

The universe is allowed to have tons of people intuitively notice that "visualize yourself doing X" is an obviously winning strategy, one that typically makes doing X a downhill battle if it's possible at all. So many different people pick it up that you first encounter it in an awful form, e.g. in middle/high school you first hear about it, but the speaker says, in the same breath, that you should use it to feel more motivated to do your repetitive math homework for ~2 hours a day.

I'm sure people could find all sorts of improvements, e.g. an entire field of self-visualizationmancy that provably helps a lot of people do stuff, but the important thing I've noticed is to simply not skip that critical step. Eliminate ugh fields around self-visualization, or take whatever means necessary to prevent ugh fields from forming in your idiosyncratic case. (Also, social media algorithms could have been measurably increasing user retention by boosting content that places ugh fields where they decrease agency/motivation, with or without the devs being aware of this, since they may be looking only at inputs and outputs, or maybe just outputs; so this could be a lot more adversarial than you were expecting.) Notice the possibility that this might or might not have been a core underlying dynamic, without the authors' awareness, in Yudkowsky's old Execute by Default post or Scott Alexander's silly hypothetical talent differential comment.

The universe is allowed to give you a brain that so perversely hinges on self-image instead of just taking the action. The brain is a massive kludge of parallel-processing spaghetti code and, regardless of whether or not you see yourself as a very social-status-minded person, the modern human brain was probably heavily wired to gain social status in the ancestral environment, and whatever departures you might have might be tearing down Chesterton-Schelling fences.

If nothing else, a takeaway from this was that the process of finding the missing piece that changes everything is allowed to be ludicrously hard and complicated, while the missing piece itself is simultaneously allowed to be very simple and easy once you've found it.

Do you know about your cameo in Scott Alexander's novel Unsong? What probability would you have placed on shifting career paths from academia to something more like your Unsong character if, in the 1970s, your younger self and everyone you knew had witnessed the sky shattering?

John S Wentworth wrote a 1-minute post considering whether individual innovators cause major discoveries to happen many decades or even centuries earlier than they would have without that one person, or whether they only accelerate the discovery by a few months or years before someone else would have made the advance. Based on your impact on the philosophy scene in the 1970s and EA's emergence decades later (the counterculture movement is considered by many to have died down during the mid-1970s, which is notably around the time when some of your most famous works came out), what does your life indicate about Wentworth's models of innovation, particularly conceptual and philosophical innovation?

What do you think about the current state of introductory philosophy education, with the ancient texts (Greek, Kant, etc.) being Schelling points that work great in low-trust environments, but still following the literary traditions of their times? Do you think undergrads and intellectuals outside contemporary philosophy culture (e.g. engineers, historians, anthropologists, etc.) would prefer introductory philosophy classes be restructured around more logical foundations, to produce innovators and reductionists like your 1970s self rather than literary-analysis-minded thinking?

One of the things I found extraordinary about MrBeast videos was how it seems like viewers come for the extreme content on the thumbnail, and then stay to see the details of how exactly people succeed at doing extraordinary things.

On an economic basis, it looks like it scales really well to find ways to do really big things (unambiguously net positive) that you can also bundle into an entertainment product that inspires other people. I don't know what the median viewer would think of this, but I found it really vivid to see the videos "ramp up" with more and more people getting helped per minute of the video.

Have you thought of other ways you could set up something where the complexity of the situation or the number of people helped gets "ramped up" over the course of the video, or where the subjects of the story find increasingly extraordinary ways to overcome increasingly extraordinary challenges? Showing people shine brightly, being their best selves and then winning for it, seems to be a common theme for the channel.

One thing I really liked about it was the title: "situational awareness". I think that phrase is very well-put given the situation, and I got pretty good results from it in conversations which were about AI but not Leopold's paper.

I also found "does this stem from the pursuit of situational awareness" or "how can I further improve situational awareness" to be helpful questions to ask myself every now and then, but I haven't been trying this for very long and might get tired of it eventually (or maybe they will become reflexive automatic instincts which stick and activate when I'd want them to; we'll see).

I might be mistaken about this, but I thought there was a possibility that Khrushchev and others anticipated that leaders and influential people in the US, the USSR, and elsewhere in the world would interpret space race victories as a costly signal of strategic space superiority (while simultaneously being less aggressive and less disruptive to diplomacy than developing and testing more directly military technology such as Starfish Prime). Separately, there was a possibility that this anticipation was a correct prediction of what stakeholders around the world (including the third world and "allied" countries, which often contained hawk and dove factions, regime change, etc.) would conclude about the relative power of the US and USSR.

Momentum behind the space race itself had died out by 1975, possibly as part of the trend described in the 2003 paper "The Nuclear Taboo", which argued that a strong norm against nuclear weapon use developed over time. During the Korean War in 1950, American generals were friendly towards the idea of using nuclear weapons to break the stalemate but ultimately decided not to; they were substantially less friendly towards nuclear weapon use by the time the Vietnam War started, and since then have only considered it progressively more unthinkable (the early phases of the Ukraine War in 2022, particularly the period leading up to the invasion, might have been an example of backsliding).

At some point in the 90s or the 00s, the "whole of person" concept became popular in the US national security community for security clearance matters.

It distinguishes between a surface-level vibe from a person and an attempt to understand the whole person. The surface-level vibe is literally taking the worst of a person out of context, whereas the whole-person concept means making any effort at all to evaluate the person, the odds that they're good to work with, and in which areas. Each subject has their own cost-benefit analysis in the context of the different work they might do, and more flexible people (e.g. younger people) and weirder people will probably have cost-benefit analyses that change somewhat over time.

In environments where evaluators are incompetent, lack the resources needed to evaluate each person, or believe that humans can't be evaluated, there's a reasonable justification for ruling people out without making an effort to optimize.

Otherwise, evaluators should strive to make predictions and minimize the gap between their predictions of whether a subject will cause harm again and the reality that comes to pass. That means putting in any effort at all to distinguish between: harm caused by mental health; mistakes due to unpreventable ignorance (e.g. the PauseAI movement); mistakes caused by ignorance that should have been preventable; harm caused by malice correctly attributed to the subject; harm caused by someone spoofing the point of origin; and harm caused by a hostile individual, team, or force covertly using SOTA divide-and-conquer tactics to disrupt or sow discord in an entire org, movement, or vulnerable clique. See conflict vs. mistake theory.
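One minimal sketch of what "minimizing the gap" could look like in practice is scoring past predictions with a Brier score; the forecasts and outcomes below are invented for illustration, not from any real clearance process:

```python
# Brier-score an evaluator's predictions against what actually happened.
# All data here is hypothetical, for illustration only.

def brier_score(predictions, outcomes):
    """Mean squared gap between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Evaluator's forecast probability that each subject causes harm again,
# paired with what actually happened (1 = caused harm, 0 = did not).
forecasts = [0.9, 0.2, 0.6, 0.1]
reality   = [1,   0,   0,   0]

print(f"Brier score: {brier_score(forecasts, reality):.3f}")  # lower is better
```

An evaluator who tracks this number over time at least finds out whether their whole-person judgments beat ruling people out by vibe.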

Thanks for making a post for this! Coincidentally (probably both causally downstream of something) I had just watched part of the EAG talk and was like "wow, this is surprisingly helpful, I really wish I had access to something like this back when I was in uni, so I could have at least tried to think seriously about plotting a course around the invisible helicopter blades, instead of what I actually did, which was avoiding it all with a ten-foot pole".

I'm pretty glad that it's an 8-minute post now instead of just a ~1-hour video.

My bad; I should have looked into Nvidia more before commenting.

Your model looked like something that people were supposed to try to poke holes in, and I realized midway through my comment that it was actually a minor nitpick + some interesting dynamics rather than a significant flaw (e.g. even if true it only puts a small dent in the OOM focus).

Stock prices represent risk and information asymmetry, not just the P/E ratio.

The big 5 tech companies (Google, Amazon, Microsoft, Facebook, Apple) primarily do data analysis and software (with Apple as a partial exception). That puts each of the five (except Apple to some extent, since its thread to hang by is iPhone marketing) at the cutting edge of everything that high-level data analysis is needed for, which is a very diverse game where each of the diverse elements adds a ton of risk (e.g. major hacks, data poisoning, military/geopolitical applications, lightning-quick historically unprecedented corporate espionage strategies, etc.).

The big 5 are more like the wild west: everything that's happening is historically unprecedented, and they could easily become the big 4, since a major event (e.g. a big data leak) could cause a staff exodus or a software exodus that allows the others to subsume most of their market share. Imagine how LLMs affected Google's moat for search, except LLMs are just one example of historical unprecedentedness (one that EA happens to focus on far more closely, relative to other advancements, than Wall Street or DC do), and most of the big 5 companies are vulnerable in ways as brutal and historically unprecedented as the emergence of LLMs.

Nvidia, on the other hand, is exclusively hardware and has a very strong moat (obviously semiconductor supply chains are a big deal here). This reduces risk premiums substantially, and I think it's reasonably likely that Nvidia is even substantially lower risk per dollar than a holding diversified across all 5 of the big 5 tech companies combined. The big 5 set a precedent that the companies making up the big leagues are each very high risk, including in aggregate; Nvidia's unusual degree of stability, while also arriving on the big-leagues stage without diversifying or getting great access to secure data, might potentially shatter the high-risk big-tech investment paradigm. I think this could cause people's P/E ratio for Nvidia to end up maybe two or even three times higher than it should be, if they depend heavily on comparing Nvidia specifically to Google, Amazon, Facebook, Microsoft, and Apple. This is also a qualitative risk that can spiral into other effects, e.g. a qualitatively different kind of bubble risk than what we've seen from the big 5 over the last ~15 years of the post-2008 paradigm where data analysis is important and respected.

tl;dr Nvidia's stable hardware base might make comparisons to the 5 similarly-sized tech companies unhelpful, as those companies probably have risk premiums that are much higher and harder for investors to calculate.
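As a back-of-the-envelope illustration of how much the assumed risk premium alone can move a "justified" P/E multiple (every number below is an invented assumption, not an estimate for Nvidia or anyone else), a simple Gordon-growth-style model prices earnings at roughly 1 / (discount rate minus growth rate):

```python
# Gordon-growth-style illustration: how the justified earnings multiple moves
# with the assumed risk premium. All numbers are hypothetical.

def justified_pe(risk_free, risk_premium, growth):
    """Fair P/E ~ 1 / (r - g), with discount rate r = risk_free + risk_premium."""
    r = risk_free + risk_premium
    assert r > growth, "model only valid when discount rate exceeds growth"
    return 1 / (r - growth)

risk_free, growth = 0.04, 0.02

big5_style_premium = 0.08  # priced like a high-risk big-5 data company
hardware_premium   = 0.02  # priced with a stable hardware moat

pe_risky  = justified_pe(risk_free, big5_style_premium, growth)  # 1/0.10 = 10x
pe_stable = justified_pe(risk_free, hardware_premium, growth)    # 1/0.04 = 25x

print(f"big-5-style premium  -> {pe_risky:.1f}x earnings")
print(f"hardware-moat premium -> {pe_stable:.1f}x earnings")
```

Under these made-up inputs, shrinking the risk premium from 8% to 2% alone multiplies the justified P/E by 2.5x, which is the "twice or even three times" range described above.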

Ah, I see; for years I've been pretty pessimistic about people's ability to fool such systems (namely voice-only lie detectors facilitated by large numbers of retroactively-labelled audio recordings of honest and dishonest statements in the natural environments of different kinds of people), but now that I've read more about human genetic diversity, that might have been typical mind fallacy on my part; people in the top 1% of charisma and body-language self-control tend to be the ones who originally ended up in high-performance and high-stakes environments as those environments formed (or had such environments form around them, just as innovative institutions form around high-intelligence, high-output folk).

I can definitely see the best data coming from a small fraction of the human body's outputs, such as pupil dilation; most of the body's outputs should yield Bayesian updates, but that doesn't change the fact that some sources will be wildly more consistent and reliable than others.
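A minimal sketch of that kind of updating, combining cues of very different reliability through log-likelihood ratios (the cue names and likelihood numbers are invented assumptions, not measured values):

```python
import math

# Combine independent cues into one posterior via log-likelihood ratios.
# A reliable cue (pupil dilation) moves the posterior far more than a noisy one.

def posterior(prior, cues):
    """cues: list of (P(signal | deceptive), P(signal | honest)) pairs."""
    log_odds = math.log(prior / (1 - prior))
    for p_deceptive, p_honest in cues:
        log_odds += math.log(p_deceptive / p_honest)
    return 1 / (1 + math.exp(-log_odds))

prior = 0.30  # hypothetical base rate of deception in this setting

cues = [
    (0.70, 0.20),  # pupil dilation: fairly reliable, likelihood ratio 3.5
    (0.55, 0.45),  # fidgeting: barely informative, likelihood ratio ~1.2
]

print(f"posterior P(deceptive) = {posterior(prior, cues):.2f}")
```

The reliable cue dominates the posterior while the noisy one barely moves it, which is the sense in which some sources matter wildly more than others even though all of them technically yield updates.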
