Claude Code + Theater Scraper

Am playing around with AI coding, which is fun and frustrating and very educational.

Am going to post some of my attempts and observations.

I use Harper's excellent LLM Codegen Workflow (see also this followup). For this experiment, I used Claude Code. I did this in late Feb 2025 (sorry for the delay in writing it up).

My basic project was to write a scraper that would get theater listings from a bunch of London theaters and send me an email daily with new listings. I thought this was a good experiment because it was a relatively easy project in a domain I know well enough to do myself.
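The overall shape of the daily job is simple: scrape each theater, diff against what was seen before, and email anything new. Here is a rough sketch of that diff-and-email step with illustrative names (seen_listings.json, the addresses, and the local SMTP relay are all assumptions for the sketch, not the project's actual setup):

```python
# Rough sketch of the daily diff-and-email step. All names here are
# illustrative; assumes listings come back as dicts from the scrapers
# and that a local SMTP relay is available.
import json
import smtplib
from email.message import EmailMessage
from pathlib import Path

SEEN_FILE = Path("seen_listings.json")  # placeholder snapshot file

def new_listings(current: list[dict]) -> list[dict]:
    """Return listings not present in the previous run's snapshot."""
    seen = set()
    if SEEN_FILE.exists():
        seen = {tuple(k) for k in json.loads(SEEN_FILE.read_text())}
    key = lambda l: (l["theater"], l["title"], l["date"])
    fresh = [l for l in current if key(l) not in seen]
    # Overwrite the snapshot with everything seen this run.
    SEEN_FILE.write_text(json.dumps([list(key(l)) for l in current]))
    return fresh

def email_digest(fresh: list[dict]) -> None:
    """Send one email listing whatever is new today."""
    if not fresh:
        return
    msg = EmailMessage()
    msg["Subject"] = f"{len(fresh)} new theater listings"
    msg["From"] = "scraper@example.com"  # placeholder addresses
    msg["To"] = "me@example.com"
    msg.set_content("\n".join(
        f"{l['theater']}: {l['title']} ({l['date']})" for l in fresh))
    with smtplib.SMTP("localhost") as s:  # assumes a local mail relay
        s.send_message(msg)
```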

Here is the Spec that I came up with in a back-and-forth with ChatGPT 4o. It really wanted to expand the scope, and it wanted to use Selenium (to simulate web browser requests) in spite of it not being needed.

I then asked for a specific set of prompts and a todo list for those prompts (I think this was with o1, but it could have been o3-mini). The result was pretty good and I felt ready to go.

I was trying to minimally intervene in the code generation process. 

Today the results may be different (I have since borrowed a better CLAUDE.md -- thanks Jessie Vincent -- and both Claude Code and Claude itself have gotten better), but the first attempt was a disaster. Claude Code kept trying to do all of the prompts at once, but more importantly, it seemed completely lost when it came to the actual work of figuring out how to get the right information from the web pages. I spent a bunch of time and Claude $, but eventually scrapped that attempt entirely and started fresh with one big change: I manually went out and downloaded every single page I wanted to be able to scrape and put them in a folder (tests/fixtures).

That one change really made a big difference. Claude Code still wanted to do everything all at once, but now I could push it toward correct answers about what to look for in the HTML and what the output of scraping each fixture should be (a sketch of that fixture-driven testing is below). The result is something that is useful and seems to be working.
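Here is roughly what the fixture-driven setup buys you, as a minimal sketch: parse_listings(), the .event-card selectors, and the fixture filename are hypothetical stand-ins for the real per-theater parsers, but the pattern of asserting against saved HTML is the point.

```python
# Minimal sketch of testing a parser against a saved fixture page.
# parse_listings() and the CSS selectors are hypothetical; real
# selectors depend on each theater's markup.
from pathlib import Path
from bs4 import BeautifulSoup

FIXTURES = Path("tests/fixtures")

def parse_listings(html: str) -> list[dict]:
    """Extract show title and date from a saved listings page."""
    soup = BeautifulSoup(html, "html.parser")
    shows = []
    for card in soup.select(".event-card"):  # hypothetical selector
        shows.append({
            "title": card.select_one(".title").get_text(strip=True),
            "date": card.select_one(".date").get_text(strip=True),
        })
    return shows

def test_theater_fixture():
    # Claude Code can check its work against a known-good page
    # instead of guessing at live HTML it cannot see.
    html = (FIXTURES / "example_theater.html").read_text()  # placeholder name
    shows = parse_listings(html)
    assert shows, "expected at least one listing in the fixture"
    assert all(s["title"] and s["date"] for s in shows)
```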

My big takeaways were: 

  1. be prepared to throw everything out (also, learn & incorporate git);
  2. make the spec and the prompts simple -- no, simpler than that; 
  3. anything you would want to have at your disposal when coding, make sure Claude Code has and knows it has; 
  4. stop Claude Code often to point out obvious things -- "that is out of scope for this step", "mocking the test result doesn't mean you passed the test", "yes the tests are important, you should still do the tests and not move on if some are failing.";
  5. pay attention to Claude Code and intervene;
  6. Claude Code will do better in areas that you know because you'll be able to tell when it is not doing good stuff and stop/redirect it;
  7. this type of coding is a bit like social media scrolling in its dopamine slot machine quality (someone at Coding AI said this and I agree, but I forget who said it).


Harvey Anderson

Am in SF/Bay to mourn the death of Harvey Anderson. I'm devastated. Harvey was a friend and someone I admired tremendously. He was a giant in shaping what it means to be a tech General Counsel but also in every other aspect of his life. 

He was always wise and generous with his time and with the way his mind was open and curious to many possibilities. He could drive a hard bargain -- see eg the amazing amount of resources he brought to Mozilla through the Google and other search deals -- but he was also kind and supportive in so many ways. 

I had known ~of~ him for a long time when I asked whether he would mentor me as a young and inexperienced GC at Twitter. While he rejected the label, he was an outstanding mentor, always reminding me to think about the bigger picture and gently pushing me to focus on areas that were important. He also had a wonderful sense of humor and a smile that invited you in, and reminded you of the insignificance of whatever decision you were asking about in the context of the more important things in life. 

Harvey was also a model for me in thinking about family. His is wonderfully overflowing. I expect they know how much he loved them and how much joy they brought him, but it was evident from the outside too. My thoughts are with each of them as they mourn.

Two obituaries are here:

Piedmont Exedra 

Marquette

Rest in peace Harvey, I am grateful to have known you.

Notebook LM

After listening to Hard Fork this week, am playing around with NotebookLM, Google's new AI tool designed around "sources" uploaded by users and developed in collaboration with Steven Johnson.* Am excited that Google Labs is back and also, I agree with Casey Newton that NotebookLM is very "old school google": geeky, experimental and niche.

Listening to the podcast encouraged me to play around a bit with NotebookLM, so here are some results. Sadly, I think sharing the notebooks themselves is limited to specific signed-in accounts, so am providing a few podcasts and notes in Google Docs. LMK if that's not true and I'll link to the complete notebooks.

First, I was about to visit the Churchill War Rooms, the underground bunkers from which UK military command worked during World War II. As a sidenote, they are really interesting, especially the Map Room, which reminded me a lot of the way the Situation Room in the White House is a data collection hub in addition to a place for national security meetings. They also have a recreation of a July 9, 1944 Chiefs of Staff meeting debating Churchill's suggestion to consider bombing small German towns in retaliation for German bombing of civilian targets in London. That recreation is interesting both for the substance they discuss and also because it is very similar in form to thousands of meetings I have been in, from a product team trying to decide whether to implement Sergey Brin's latest feature idea, to the Blueprint for an AI Bill of Rights team figuring out whether to make a West Wing-suggested change in the document. Seeing that recreated was great.

Anyhow, before going to the War Rooms I printed to PDF three Wikipedia articles about Churchill, London in World War II, and the Blitz and plugged them into NotebookLM. The resulting notebook was interesting and somewhat useful (here's some output from the notebook and podcast it generated). The podcast in particular was less a primer than a bit of additional colour, though when I asked specifically for the notebook to tell me what I should know before visiting, it did a good job of summarizing some basic facts (see the end of the output document). I tried similar things for an upcoming Berlin visit, including a set of web pages focused on the history of Hitler's rise to power and a separate group focused on the airlift, the wall and the Cold War in Berlin. These were also worth the time and interesting.

Then I split this blog up into 20 PDFs and uploaded them. That project was less successful. The podcast is cringeworthy and the notes are of varying quality.** Perhaps this is unsurprising given the really diverse set of posts I have up here. Seems that NotebookLM does better with documents that are thematically aligned or different descriptions of a single phenomenon. On the other hand, I liked that NotebookLM is not shy in saying when a source does not answer whatever question I asked (see the end of the notes doc).

In all, I enjoy these specific purpose built AI tools. I'm glad for the whimsical podcasts being added to a relatively dry product, even though I'm not sure they have a purpose. I'm thrilled that Google Labs is back and is trying stuff (I hadn't noticed before now). I'm not confident this is a thing that I'll keep using beyond the novelty but I'll keep playing around with it and seeing what sticks.



* !!! Really excited for this because I'm a huge fan of his work. If you haven't already read his books, I recommend either Where Good Ideas Come From: The Natural History of Innovation or Emergence: The Connected Lives of Ants, Brains, Cities, and Software as starting points.

** I tried it again from my non-Google Workspace account and got a very different set of results. I think these are substantially better, though still contain some straightforward errors. It could be that the NotebookLM running for Google Workspace accounts is different than the one running on regular Google accounts, so your mileage may vary.

Google Timeline to Countries and Dates

I recently needed a list of all of the countries I had been to and the dates I was in each. Naturally I thought of my Google Timeline (formerly "location history") as a way to do it. Google Timeline is a data store of all the places you have been over time. It is extremely detailed and, at least for me, seems relatively complete. To view yours, go to your timeline.

To get your timeline in a form you can manipulate, you can use Google Takeout, Google's data portability service (big kudos to Fitz and the whole Google Takeout team). My file contained over 2.8 million locations, so the first thing I did was use geopy to throw out any locations that weren't at least 50 miles apart (see code). That left ~12,000 entries. For each of those 12,000 entries, I rounded the coordinates to reduce calls, then used geopy to reverse geocode (look up the street address based on the latitude and longitude), threw out everything but the country, and output any change with a date (see code).
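In outline, the two passes look something like this (a minimal sketch, assuming the Takeout JSON has already been parsed into (timestamp, lat, lon) tuples; the real export format varies, and the user agent string is a placeholder):

```python
# Minimal sketch of the two passes, assuming `records` is a list of
# (timestamp, lat, lon) tuples already parsed from the Takeout JSON.
from time import sleep
from geopy.distance import geodesic
from geopy.geocoders import Nominatim

def thin_points(records, min_miles=50):
    """Keep only points at least min_miles from the last kept point."""
    kept = []
    for ts, lat, lon in records:
        if not kept or geodesic(kept[-1][1:], (lat, lon)).miles >= min_miles:
            kept.append((ts, lat, lon))
    return kept

def country_changes(points):
    """Reverse geocode each point and print the date whenever the country changes."""
    geocoder = Nominatim(user_agent="timeline-countries")  # placeholder agent
    cache, last_country = {}, None
    for ts, lat, lon in points:
        key = (round(lat, 1), round(lon, 1))  # rounding lets nearby points share a lookup
        if key not in cache:
            loc = geocoder.reverse(key, language="en")
            sleep(1)  # Nominatim's usage policy: at most one request per second
            cache[key] = loc.raw.get("address", {}).get("country") if loc else None
        country = cache[key]
        if country and country != last_country:
            print(ts.date(), country)
            last_country = country
```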

This was somewhat similar to a project I did more than six years ago, though Google had changed the format of its timeline file, so I needed to rewrite it. It should be pretty easy to also produce a country chart, but I haven't done that yet.

I continue to believe that data portability will not take off and be demanded by users until there exists useful things to do with the data. Hopefully scripts like these can help contribute to that.


Biden Admin Artificial Intelligence Executive Order & OMB Guidance: Some thoughts & a calendar

Take what I say here with a grain of salt because my old team worked on this (and I worked on earlier iterations and the Blueprint for an AI Bill of Rights). 

Now that I've had a chance to read the U.S. AI Executive Order (here's a version of the order that prints in fewer pages) and the accompanying -- and equally important -- Office of Management and Budget (OMB) Draft AI Guidance, I wanted to share a couple of thoughts and a calendar to help folks who are tracking the various deliverables assigned in the AI Order and the OMB AI Guidance.

President Biden speaking at the AI Order signing ceremony.

Much has been said about the size of the AI Order, but what struck me about it was its willingness to contain tensions. It has provisions dealing with concerns about AGI and existential threats as well as the current and historical harms from AI that are impacting people now. It has numerous specific provisions that are more national security focused and also many that are more typical of domestic policy and equity. It has a number of provisions that may impose burdens on new entrants to the AI space but also provisions that would radically lower barriers to entry. It addresses numerous AI harms but also contains provisions that recognize and seek to catalyze AI's benefits.

All of this speaks to the nuanced understanding of AI that exists in the federal government from President Biden to the various folks working day to day on getting the Order together. I believe that's a product of greater tech fluency throughout the White House and federal agencies and the way the White House has prioritized AI policy.

Another striking thing about the AI Order is the sheer volume of deliverables it launches. I'm going to want to see what becomes of them, so I made an AI Order and OMB AI Guidance Calendar (and in iCal). It might be helpful to you too. You can import it into your Google or iCal calendar. Please let me know if I got a date wrong or missed one.
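For the curious, each dated deliverable boils down to an all-day iCalendar event. Here is a tiny sketch of emitting one, using the December 5 comment deadline mentioned below (the UID and product ID strings are placeholders, and the summary text is illustrative, not quoted from the Order):

```python
# Tiny sketch of writing one all-day deliverable as an iCalendar event.
from datetime import date

def vevent(summary: str, due: date) -> str:
    """Build a minimal all-day VEVENT block."""
    d = due.strftime("%Y%m%d")
    return "\r\n".join([
        "BEGIN:VEVENT",
        f"UID:{d}-{summary[:10].replace(' ', '-')}@example.org",  # placeholder UID
        f"DTSTAMP:{d}T000000Z",
        f"DTSTART;VALUE=DATE:{d}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
    ])

ics = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//ai-order-calendar//EN",  # placeholder product id
    vevent("OMB AI Guidance comments due", date(2023, 12, 5)),
    "END:VCALENDAR",
])
print(ics)
```

A file like this is what Google Calendar and Apple Calendar accept through their import functions.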

The calendar contains only entries tied to dates, one hundred in all. There were a lot more actions that either started immediately or were not associated with a date by which they had to be done.

In creating the calendar, it was also striking that the AI Order requires some deliverables that are quite distant from today. I'm generally pretty skeptical of requirements far in the future for the reasons Jen Pahlka describes so well in her great book Recoding America.

I will add more entries from the OMB AI Guidance once it is finalized but for now the calendar contains the most important one: December 5, 2023, the date that comments are due. There is a helpful guide to commenting on the Guidance as well as a Regulations.gov page for submitting comments. Please consider giving it a read and submitting comments.

I'm excited that the AI Order and draft OMB AI Guidance are out in the world and look forward to hearing what folks think about them.

A good group of the Blueprint for an AI Bill of Rights team posing together at the AI Order signing ceremony.


My Time in The Biden-Harris Administration

I recently (ok, not that recently) left the Biden-Harris Administration after serving in a variety of ways over the last few years. Initially I was part of the transition team. Then, after a break, I became Deputy Assistant to the President and Principal Deputy US Chief Technology Officer (CTO) in the Office of Science and Technology Policy, through the wonderful National Science Foundation Technology, Innovation and Partnerships Directorate. I'm grateful for the time I had in the administration, the phenomenal people I got to work with, and the impact we had together.

The Eisenhower Executive Office Building hallway leading to the Navy Steps down to the White House. One of my favorite views in the EEOB. In the morning the light as you walk towards those doors is blinding.

Joining the small but mighty CTO team in the fall of 2021 was quite different from when I held a similar role in the Obama-Biden Administration. For one thing, I was joining at the beginning of an administration, not the end. For another, President Biden had learned a number of lessons during his long career and time as Vice President that led his administration to keep a rigorous focus on the priorities President Biden had outlined on the campaign and to prioritize effective implementation of policy initiatives at the highest level. Finally, from a tech perspective, the government in 2021 was different than in 2014. 

The first US CTOs extended our government’s capacity to use technology effectively and brought tech expertise to White House policy making. In 2009, few agencies used modern technology fluently. Many career techie civil servants were pushing for change but were met with the various forms of resistance that Jen Pahlka details in her exceptional book, Recoding America. The first three US CTOs -- Aneesh Chopra, Todd Park, and Megan Smith -- and their teams were successful in a wide array of policy areas. They opened data sets for transparency and innovation, championed expanding digital medical records, helped increase access to broadband, brought more tech expertise to policy tables, and much more.

They also made significant strides, working with many others at the White House and across government agencies, in building the capacity of the federal government to deliver modern technology. That included helping to create the US Digital Service, the Technology Transformation Services, and the Presidential Innovation Fellows, and supporting the creation of agency digital services (e.g. the Defense Digital Service, the Health and Human Services Digital Service, etc.) and the transformative work of the federal and agency Chief Information Officers (CIOs) and Chief Data Officers (CDOs).

One of the most exciting things about being back in government in 2021 was how different it was from 2009. In 2021, there was significant tech expertise in all of the White House policy councils, from the Domestic Policy Council to both the National Economic Council and the National Security Council. Even without counting the excellent CTO Team, the Office of Science and Technology Policy had significant technical expertise in its other divisions – including Alondra Nelson’s incredible Science and Society division. Both Alondra and Arati Prabhakar, two of the three Office of Science and Technology Policy Directors during my tenure, were highly technically sophisticated. In addition, leaders at agencies across the spectrum were increasing technical fluency at all levels.

Senior Staff at the Office of Science and Technology Policy circa May 2022.

Furthermore, the centralized tech experts at the US Digital Service, Federal CIO, and GSA were – and still are – thriving under the strong leadership of Mina Hsiang, Clare Martorana, and Robin Carnahan. Many agencies have digital services groups of their own, while others have bulked up their CIO, CTO or other offices to more aggressively pursue strong digital service delivery. And, if you looked into the teams working on the biggest problems, such as climate change or COVID-19, you’d find strong tech experts.

I love walking meetings. This is staged for the White House photographer. In real ones I wouldn't be wearing a suit. With me are two wonderful members of the CTO team, Ismael Hussein and April Chen. 

While the government environment was changing, the CTO team’s core mission remained the same. Our priorities were to build tech capacity and advise on policy, all in the service of delivering on the President’s agenda and delivering results for the American people. The CTO team still works hard on establishing good tech policy, including in the areas of artificial intelligence, digital assets (cryptocurrency), privacy, platform regulation, advanced air mobility, web accessibility, broadband access, wireless spectrum policy and in many other areas. Also, under the leadership of Denice Ross and now Dominique Duval-Diop in the role of U.S. Chief Data Scientist, we had the privilege of continuing to support federal data science expertise, including in the development of equitable data that can be used to ensure government benefits and services reach those who need them the most and that data science is a key part of policy implementation.

President Biden and Vice President Harris meeting with AI CEOs on the promise and risks of AI. This meeting and its followup commitments are examples of the types of tools the CTO team used to drive policy forward. 

Delving deeper into the team’s artificial intelligence work: the US CTO team was deeply involved in President Biden’s work on AI. The team helped draft and launch the landmark Blueprint for an AI Bill of Rights. We spearheaded the Biden-Harris AI CEO convening that resulted in a set of commitments from the largest AI companies. We led, hosted or participated in the various White House AI processes to create federal AI policy as well as subsidiary policies such as the National AI Research and Development Strategic Plan. We put forward the National AI Research Resource to ensure public sector participation in AI research and development. We also hosted the National AI Initiative Office, the federal coordination body for AI policy. That comprehensive approach to AI is similar to how we approached other policy areas.

Alondra Nelson leading a panel during the launch of the Blueprint for an AI Bill of Rights in October 2022. I was proud to have helped draft and achieve the internal consensus required to publish the Blueprint. It was a deep collaboration with the Science and Society Division.

There is still a ton of work to do and the leadership team now in place on the US CTO team is phenomenal. Deirdre Mulligan is the Principal Deputy US CTO and is someone I’ve wanted to work with – or for – for more than 20 years, Austin Bonner is Deputy US CTO for Policy, Wade Shen is Deputy US CTO for AI and leads the National AI Initiative Office, Denice Ross is now Deputy US CTO for Tech Capacity, Dominique Duval-Diop is US Chief Data Scientist, and Nik Marda, the longest currently serving member of the CTO team, is the Chief of Staff. Working with each of them, and the rest of the CTO team, is what I miss most about having left the administration. Watching them take the team in new directions will be the best thing about sitting on the sidelines.

Zoom tiles from a meeting of the CTO team.

Our third US CTO, Megan Smith, sometimes joked that the CTO team’s job would be fundamentally different when there were as many tech experts in all the rooms as lawyers or economists. That dream imagines a government that always delivers services effectively, efficiently, and equitably on behalf of the American people. A government that understands, and can keep up with, technologies and the disruptions they create to mitigate harm and ensure that people can maximally benefit from our phenomenally innovative nation. I was privileged to be able to work towards that dream. 

P.S. Now is a critical time to come into government as a techie. The potential to make a deep positive impact on the lives of people is huge. It is also a time of tremendous opportunity because of President Biden’s genuine empathy in understanding people’s needs, as well as his focus and excellence in execution in delivering on their behalf.

If you are interested in getting involved, please consider applying to join the United States Digital Service, Technology Transformation Services, Presidential Innovation Fellows, US Digital Corps or the broader set of government technical jobs on the Federal Tech Jobs Portal.


One of my favorite views. When leaving around sunset, there would often be a murmuration of starlings near the edge of the South Lawn with the Washington Monument in the background.

The Last Four Years

I haven’t shared an update here on what I have been up to since the end of the Obama Administration, so now’s a good time to write some of this down.

Like many, I was extremely worried by Donald Trump’s election to the Presidency. Doing all I could to get to a different result in 2020 drove much of my work since 2017. I also tried to extend my non-profit work and grow my fundraising ability, because I came to the sad realization that my fundraising ability is worth at least as much as, if not more than, my strategic advice to the non-profits I care about (and I am not very good at fundraising).

All of the projects below were overlapping, and most were not full-time. They combined to be a ridiculous amount of work sometimes, and nearly no work at other times. The list below is in somewhat chronological order.

  • Thanks to Megan Smith and Tom Perez, I got to help bring in Raffi Krikorian to be the first Chief Technology Officer of the Democratic National Committee (DNC). Tom understood the value of tech and how badly Democrats were being beaten in this regard. He helped re-up tech within the DNC and the team has done amazing stuff under Raffi and Lindsey Schuh Cortes, and now Nellwyn Thomas and Kat Atwater.

  • Thanks to Mike Yang, Haley van Dÿck and Mikey Dickerson, I helped found Alloy.us and was its General Counsel for the first year. Alloy’s mission was to help improve data and technology for the progressive ecosystem. We were originally focused on legal data sharing among progressive organizations and campaigns, but once the Democratic Data Exchange (DDx) got up and running, and based on feedback from potential users, we shifted to improving basic data availability and update frequency, as well as voter registration. The wonderful Kendall Burman took over as General Counsel on January 1st, 2020. Alloy is now in talks with CiviTech to have them carry on our mission. I’m thankful for the incredible people we were able to hire, many of whom I had not met before, and for the fact that Alloy seems to have had a positive impact on the election and Georgia runoffs through its set of partners.

  • While working on the tech and data side for Dems and progressives, I also got drawn into, and did some funding of, and fundraising for, a wide set of organizations and candidates. One of the most amazing things to come out of 2016 was the incredible set of new organizations and candidates. The narrow 2016 loss to the person who would become one of the worst presidents in American history was also a catalyst for a bunch of more established organizations to do great work. I am really grateful to the many, many people who shared their wisdom and time as I was getting up to speed (I still am). I am also grateful to the folks who did the work of registering a record breaking number of new voters, helped to elect good candidates, including President Biden, and then protected those victories against the efforts to undo them. Before 2016 I had no idea of the richness and variety of people and organizations that are part of the big tent of the progressive community and Democratic party. It looks chaotic at times, but I was constantly blown away by the talented, committed people who work in politics. Many do so without much compensation or limelight, and nearly everyone I met shared a basic common purpose that was tied to getting real benefits for people.

  • Thanks to Eric Goldman, Adelin Cai, Clara Tsao, and Denelle Dixon-Thayer, I got to help found the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation Project (TSF). I’ve wanted to do something like this for fifteen years but it took others to push for their formation.  I had the good fortune of getting to know some great Cognizant folks, including Kristen Titus and Davis Abraham, and many others while fundraising for TSPA and TSF. And then I got to talk to many old friends and make some new ones as we considered who would be the perfect first Executive Director. I couldn’t be happier that Charlotte Willner took the job. Being so immersed in a community I have been around for a long time was deeply satisfying. And the collective “FINALLY!!!” we heard on launching underscored to me that community building is important and should never be put off.

  • I’ve also spent a good amount of time working as a Board Member at Creative Commons and Data & Society. They are two great organizations from which I have learnt a lot. I am the Board lead for fundraising at both places, and so have been asking many of my friends for money to support their wonderful missions. Being a Board Member is also something I am still learning, and I have been thankful to get to watch others who are better at it than I, and to have such an intimate view of these two very different organizations during these exceptional times. I still also exist as a fanboy for the work that Creative Commons and Data & Society each do, so it is great to feel like I am helping them achieve their goals.

  • More recently, thanks to Yohannes Abraham, DJ Patil, David Recordon, Ginny Hunt, Mina Hsiang, Eric Hysen, Clarence Wardell and many others, I was lucky to get to work on the Biden Transition (and with my old friend and mentor Nicole Wong). That was a huge privilege and again underscored that working with a great team, on hard problems, with one clear purpose is a wonderful experience. I am so pleased for the folks who are going into government (and will try to write more about some of them soon). If you want to learn more about transitions, Transition Lab is a fabulous podcast and episode 46 (!!) is with Yohannes.

I would not have thought in 2016 that I would have spent a large portion of the next four years in politics and political tech. Nor would I have thought I would have helped found three non-profits. So while I continue to be on the Board at Alloy, Creative Commons, Data & Society, TSPA and TSF, I really don’t know what the next of these updates will look like. After President Biden’s heartwarming inauguration, I am very hopeful for us all.

Finally, if you got this far and are wondering what to donate to in the new year with the hellishness of COVID still in full swing, please consider using this tool to find your local food bank and donate. I have a longer and slightly older list of COVID charities here, but know that many are having a very hard time getting basics like food and shelter, so please be generous.

First Amendment and Earlyish Content Moderation

This thread got long, so here is a perhaps more easily read copy of it:

One thing that came up on #InLieuOfFun that I didn't get the chance to answer was @klonick asking about whether the earlyish content moderation was based on "First Amendment Norms." I think the answer to that is a bit more complicated than it may seem.
1/

Am speaking from my experience at Google (outside counsel 2000-3, inside 2003-9) and Twitter (2009-13). Others may have used different approaches.
2/

By "First Amendment Norms" I take @Klonick to mean that the platforms were thinking about what a govt might be OK banning under 1st Am jurisprudence in the US.

Of course, the platforms aren't govt & the 1st Am doesn't say what govts should ban, only what they cannot. But still...
3.1/

To restate, "1st Am Norms" might be something like platforms ~only~ removing what was removable under US 1st Am jurisprudence ~and~ had been generally made illegal in the US (or elsewhere if doing geo-removals), irrespective of 47 USC 230.
3.2/

First, lots of content removal was simply not cognizable under 1st Am analysis. Spam was a significant issue for Google's various products & Twitter. I don't know of a jurisdiction where spam is illegal & it is unclear whether a govt banning it would survive 1st Am.
4.1/

Nevertheless, spam removal (both by hand and automated) was/is extremely important and was done on the basis of improving user experience / usefulness of the products.
4.2/

Similarly, nudity & porn were sometimes banned for similar reasons. Some types of products (video) might be overrun by porn and be unwelcome for other uses / users if porn was not discouraged through removal, especially early. And yet, the 1st Am is quite porn-friendly.
4.3/

There were also some places that might look like they fit 1st Am norms but were really the platforms deferring to courts. For example, a court order for the removal of defamation would result in removal (irrespective of §230 immunity).
5.1/

You can square that w/ 1st Am norms but the analysis was not based on what types of defamation or other causes of action the 1st Am would allow, but rather deferring to courts of competent jurisdiction in democracyish places.* <- this last bit was complicated + inexact.
5.2/

Where we refused, it was often about fairness, justice, human rights, or jurisdictional distance from the service, not the 1st Am per se.
5.3/

All of that said, I do think there were times when we looked to the 1st Am (and freedom of expression exceptions more generally) to try to grapple with what the right policy was for each product.
6.1/

For example, in deciding what types of threats we would remove from Blogger, we used US precedent to guide our rules. My memory is hazy as to why, but I believe it stemmed from two factors: (a) that we felt that we were relatively new to analyzing this stuff but that
6.2/

the Courts had more experience drawing those lines, and (b) that the Courts and Congress, being part of a functioning democracy, might reflect the general will of the people. These were overly simplistic ideas but that's my memory.
6.3/

In summary: while I think there is something to the idea that 1st Am norms were important, I think the bigger impetus was trying to effectively build the products for our then users -- to have the product do the job the user wanted -- within legal/ethical constraints. But...
7.1/

But, we did all of that from a particular set of perspectives (and that's what the 1st Am norms are probably part of) that was nowhere near diverse enough given the eventual reach and importance of our products.
7.2/

I'd love to get the read of others doing or observing this work at the time on whether I'm misremembering/misstating @nicolewong @goldman @delbius @jilliancyork @adelin @rmack @mattcutts @clean_freak @helloyouths @dswillner +many more + those who aren't on Twitter… (please tag more)
8/

And, in case you want to see the question I'm referring to, from @Klonick on #InLieuOfFun look here at minute 22:11 (though the whole conversation was good):
https://youtu.be/oYRMd-X77w0?t=1331
9/9