We've just finished the fiction series, so it's time for another meta episode!
Dr. Brooke Macnamara is an associate professor of psychology at Purdue. She works on expertise, skill acquisition, and achievement, and we have a great conversation about how expertise research works (and where it falls short), talent identification, and much more.
Links:
- Dr. Macnamara's faculty page
- The Skill, Learning, and Performance Lab
- Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis
- Range: Why Generalists Triumph in a Specialized World
- The Tyranny of Talent
Credits:
- Theme music: “Puzzle Pieces” by Lee Rosevere. Available for use under the CC BY 4.0 license, at Free Music Archive
[00:00:09] Welcome to Ten Thousand. I'm your host, Ben Scofield, and this is a podcast about expertise.
[00:00:13] It isn't about experts, though; it's about the journey. My goal with Ten Thousand is to talk to
[00:00:18] people at all stages, from absolute novices to world-class performers, to find out how they
[00:00:21] think about what they do and what it means to excel, and especially how what they think changes
[00:00:24] as they get more experience. With that said, let's get into it. Welcome to Ten Thousand. I'm really
[00:00:34] excited to be speaking this episode to Dr. Brooke Macnamara, who is an Associate Professor
[00:00:39] of Psychology at Purdue, where she's the Principal Investigator at the Skill Learning and Performance
[00:00:43] Laboratory looking at skill acquisition, expertise, and achievement. Welcome.
[00:00:47] Thank you for the invitation.
[00:00:49] Yeah. I think I first ran across your work about ten years ago with the meta-analysis
[00:00:56] that you co-authored on the deliberate practice paradigm. I think that we'll have a great conversation
[00:01:01] about expertise and about the psychological research on it. To get started with what's
[00:01:09] probably the biggest question I have: what do you think the prospects are
[00:01:13] for a general theory of expertise? I think we can move in that direction. What a general
[00:01:22] theory of expertise is going to need is an understanding that complex human performance is complex.
[00:01:30] And so any model that we have can't be a simple explanation such as, oh, just practice 10,000 hours and
[00:01:38] you'll become an expert at anything you want. So we need to consider individual traits, whether that's
[00:01:46] physical abilities for some domains, cognitive abilities, personality traits, motivation.
[00:01:53] We need to think about experiential factors such as practice and training and types of training, maybe in
[00:02:00] fields other than the one in which you're studying. So diversity of experience, how those two things
[00:02:07] interact with each other. And then also how there are going to be interactions with the environment, from the low
[00:02:16] level, the type of task it is and the demands of the specific task, whether that's creativity or processing
[00:02:24] information quickly, whether the task is static or dynamic, as well as whether there's performance pressure,
[00:02:31] the personal environment, and then societal structures such as opportunities and barriers.
[00:02:35] So I think any general theory has to consider lots of elements, which makes it difficult. But I think we
[00:02:45] are now moving towards, or at least some people in the field are moving towards considering some of
[00:02:51] these nuances and multiple factors. Although this is somewhat in contrast with many things that get put
[00:02:59] into sort of the self-help literature, because it sells a lot better if it's sort of this one pithy
[00:03:05] argument that is the new secret to success.
[00:03:08] This is the pill you take. Yeah, this is the pill you take to be the best Rubik's Cube solver or something.
[00:03:14] Right.
[00:03:14] I don't know that that would be a particularly top-selling self-help book, but that's the idea.
[00:03:20] Right. So do you think that this direction in which we're moving, or at least in which the research is
[00:03:25] moving, if not the popularized version of it, is globally applicable to all domains of human performance?
[00:03:31] I informally categorize expertise into the more purely cognitive, like theoretical physics; the creative,
[00:03:40] like fine art; the professional, which is not purely cognitive, since it's a little more about generating things than the purely
[00:03:48] cognitive; and then the competitive, so sport and games and things.
[00:03:52] Do you think that the same, well, if you have a multi-factor theory, then you can actually like
[00:03:57] play with the weights, right? But do you think there are any domains that might
[00:04:03] be harder to fit into this direction of research?
[00:04:06] Well, I think a good theory should be able to account for those facets of the task. And exactly
[00:04:14] as you were saying, sort of weight what the demands are in order to apply it to the whole
[00:04:21] model. But to that end, tasks are really different, right? So people are very different and tasks are
[00:04:29] really different, which again is a reason that you can't have sort of this one, oh, it's just practice.
[00:04:35] So for example, Robin Hogarth has work talking about kind environments and wicked environments,
[00:04:42] right? So if you can repeat the task exactly the same way, practice is probably more likely to
[00:04:51] reap benefits from doing the same thing over and over again. But there are a lot of tasks where
[00:04:57] that's not the situation, where you're not just repeating it, you're having to tie things together in new ways
[00:05:04] or adjust to a situation. So that is almost by definition going to rely on different types of
[00:05:14] predictors. So perhaps, you know, cognitive ability matters more than just straight experience. So I think
[00:05:22] when we think about different domains or different tasks that we're interested in, it's really useful to
[00:05:30] think about what those demands are, maybe what has predicted enhanced performance or expertise. And that
[00:05:39] should inform a model, just in the same way that if we have a grand theory, then
[00:05:48] subdomains or practices should be able to look at that and test at least aspects of the theory,
[00:05:56] if not the whole theory all at once. Gotcha. So do you see some of the research directions coming out
[00:06:04] of, like, naturalistic decision making as competing with the multi-factor model, or as part of it?
[00:06:11] Oh, definitely part of it. I think we need to consider those different types of task environments. And
[00:06:20] it's just going to be, are we weighting practice more, for example? Are we weighting cognitive abilities
[00:06:26] more, for example, in terms of their predictive power in predicting expertise? And then it gets more nuanced
[00:06:33] than that, right? So what type of practice and training? Which type of cognitive abilities? In what
[00:06:39] circumstances? So no, I think that all needs to inform each other. Because if you have a model of
[00:06:47] just these very clean, almost laboratory types of expertise, well, that's not very generalizable.
[00:06:55] And there are, of course, some real-world tasks that are very repetitive and that you can repeat,
[00:07:00] but such a model will be limited in how generalizable and useful it is to these other domains. I mean,
[00:07:07] one could make an argument that there should be a separate theory for each domain, because nothing
[00:07:15] will generalize because each task is different. And as I said, you know, tasks are different and people
[00:07:19] are different. But I think it might be more useful research-wise to think about how they're different.
[00:07:28] What are the factors that make them different? Do some of these factors maybe apply to other domains that
[00:07:35] you wouldn't think? Maybe on the surface, these two tasks don't look similar, but maybe they have the
[00:07:41] same types of demands for processing information quickly, for example. So maybe there's some overlap
[00:07:48] in some ways, of course, not in all ways for an air traffic controller and a simultaneous language
[00:07:55] interpreter. These are the types of questions that my research is focused on. And I think are the
[00:08:02] interesting questions about how we can start to look at the characteristics of the task, the characteristics of
[00:08:09] the task environment, and which variables might be the best explanations for success in these different
[00:08:17] circumstances. Okay, great. Yeah, so there are a couple things there. One is, what do we want
[00:08:25] our theory for? Is it to predict how well things will go? And I know you've done some research on talent
[00:08:31] identification more recently. So is it to predict which, you know, juvenile will become the world
[00:08:38] champion of tennis or what have you? Or is it to explain why someone's performance was at a certain
[00:08:44] level? Or is it to build programs to accelerate the development of whoever in whatever domain? I think
[00:08:53] it's a really interesting avenue to pursue looking at the sub characteristics of the tasks and the domains
[00:08:59] to find where we can take lessons that are maybe well established in one and apply it to another.
[00:09:08] Yeah, for some reason I have speedcubing on the brain. And the idea of taking a few seconds to
[00:09:15] look at the current state of the world and then applying a library of algorithms very quickly, that's obviously
[00:09:23] how speedcubing is done. But it feels like those are capacities, learned or not, that could then
[00:09:31] be transferred to other domains, interestingly. And you could build programs to help people more
[00:09:37] accurately decide which algorithms apply on the fly, that sort of thing. Interesting.
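To make the "library of algorithms" idea concrete, here is a minimal sketch of the lookup pattern being described: recognize a signature of the current state, then retrieve a stored move sequence. The pattern names and move strings are invented for illustration; real speedcubing methods (e.g. CFOP) use libraries of dozens of cases.

```python
# Toy sketch of the "library of algorithms" idea from speedcubing:
# recognition maps a pattern signature to a stored move sequence.
# Patterns and move strings here are hypothetical examples.

ALGORITHM_LIBRARY = {
    "corners_solved_edges_cycled": "R U R' U R U2 R'",
    "two_adjacent_corners_swapped": "R' F R' B2 R F' R' B2 R2",
}

def solve_step(pattern: str) -> str:
    """Recognize the current pattern and retrieve the matching algorithm.

    The expert's skill lives in the fast, accurate recognition step;
    the retrieval itself is a constant-time lookup.
    """
    try:
        return ALGORITHM_LIBRARY[pattern]
    except KeyError:
        raise ValueError(f"No stored algorithm for pattern: {pattern}")

print(solve_step("corners_solved_edges_cycled"))  # R U R' U R U2 R'
```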
[00:09:42] Right. Yeah. So to the latter part of that question, there's newer research and even a pop book by David
[00:09:53] Epstein, Range, suggesting that this diversity of experiences might be predictive of performance,
[00:10:02] right? So as you're saying, this ability to look at the state of the world and then apply a library of
[00:10:07] algorithms. Now, those specific algorithms are going to be specific to speedcubing. But that ability or
[00:10:15] thinking of the way to solve a problem potentially could be applied elsewhere and might make that
[00:10:23] cuber the one who comes up with the solution somewhere else that others without that experience might not,
[00:10:30] even though the specificity of that library is almost certainly going to be different.
[00:10:37] But if you can represent a problem space as a library of specific algorithms, right, where the
[00:10:44] bottleneck to performance is rapidly figuring out the right ones to apply,
[00:10:49] maybe that's the skill that is transferable somehow. Right. Absolutely. And of course, we need the
[00:10:54] research to map out these types of tasks and the circumstances in which that's the situation. So in terms of
[00:11:02] the first part of the question, is it to predict, is it to explain, is it to develop training programs?
[00:11:09] Um, you know, I think different people will have different goals in the research they carry out.
[00:11:15] I'm interested primarily in predicting. In terms of developing training paradigms, it needs to
[00:11:27] take into account what those predictors and explanations are, and then also figure out how that gets applied.
[00:11:34] And it's not only whether training is the best way to develop expertise; how it is implemented is going to
[00:11:46] matter quite a bit. Right. So, for example, there are sports training programs around the world.
[00:11:54] And funnily enough, they rest on a view that is sort of both the idea that it's all about practice
[00:12:06] but also the idea that it's all about talent, which seem like they should conflict with one another.
[00:12:14] And obviously, I don't want to overgeneralize. Not every program is doing this or thinking
[00:12:20] in both ways. But most of them, what they do is they look at fairly young children and adolescents,
[00:12:26] and the ones who are performing the best, they select: so talent identification, talent selection.
[00:12:34] And then they attempt to accelerate their performance with additional training, so expanding
[00:12:42] the amount of training that they're doing and specializing it to a degree. So they're both,
[00:12:46] maybe not explicitly, relying on this idea that early performance must reflect some sort
[00:12:57] of innate ability and that to get better, we need to do training. And some of my more recent research
[00:13:05] looking at sports is suggesting that that is probably not the best approach. So when we look at early
[00:13:13] performance, so early to junior level athletes who are performing at the international stage,
[00:13:20] rarely are they performing at the international stage as adults. And likewise, it's not
[00:13:26] even a necessary precursor. So when we look at Olympians, most of them didn't look as impressive
[00:13:33] as their peers when they were younger. So it doesn't seem that this early identification,
[00:13:39] looking at who is the best when they're young is the right way to go about selection. And then when we
[00:13:49] take young athletes and we accelerate the amount of training, that also seems to predict
[00:13:57] short-term success but not long-term success. So Olympians were more likely to have played multiple other
[00:14:06] sports when they were younger, had that training, and actually had less sport-specific training than their
[00:14:13] national-class counterparts. So it seems like we're trying to balance short-term success and long-term success,
[00:14:18] but most of these programs are looking at short-term success and maybe forgetting to look at the long-term success
[00:14:27] rates.
[00:14:28] Yeah. I think Range is specifically about how a broad range of experience results in more
[00:14:34] high-level athletes. I think Joe Baker's recent book, The Tyranny of Talent, is also relevant here. He talks
[00:14:39] about essentially the fallacy that the child who performs at a high level will become the adult who
[00:14:46] performs at a high level, for a lot of reasons. Right. And I think there's some interesting
[00:14:50] research on training responsiveness, right? So there are a couple of different things that talent
[00:14:54] could be: it could be that you are already at a high level, or your performance curve is very steep so you get better
[00:15:00] faster than other people do, or your ceiling is higher than other people's, right? There are sort of
[00:15:05] three senses of talent. And the only one you can see in a snapshot of a child is where their initial
[00:15:13] level is. And if you just focus on that, then, I mean, that's also where you get the relative age effect: oh,
[00:15:20] you're three months older and you're the best hockey player on the team, and so you will be forever, always.
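One way to formalize those three senses of talent is as the three parameters of a learning curve: starting level, learning rate, and ceiling. A minimal sketch, with an exponential functional form and numbers chosen purely for illustration:

```python
import math

def performance(t: float, start: float, rate: float, ceiling: float) -> float:
    """Exponential learning curve: performance after t units of practice.

    start   - initial level (the only parameter visible in a childhood snapshot)
    rate    - how steeply performance improves with practice
    ceiling - asymptotic level the learner approaches but never exceeds
    """
    return ceiling - (ceiling - start) * math.exp(-rate * t)

# Two hypothetical kids: A starts higher; B has a steeper curve and higher ceiling.
for t in (0, 1, 5, 20):
    a = performance(t, start=60, rate=0.10, ceiling=75)
    b = performance(t, start=40, rate=0.30, ceiling=95)
    print(f"t={t:2d}  A={a:5.1f}  B={b:5.1f}")
# At t=0, A looks more "talented"; by t=20, B has long since overtaken A.
```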
[00:15:24] Right. Exactly. And obviously those factors don't matter then in adulthood. And then you also have,
[00:15:31] you have lots of predictors that develop at different time points and at different rates
[00:15:37] in childhood and adolescence. So not only physical maturation, um, but cognitive abilities, social
[00:15:44] maturation. There's a lot that comes online at different times for different individuals and progresses
[00:15:51] at different rates that will be very predictive of short-term performance potentially, but just
[00:15:57] then doesn't play a role long-term. Yeah. And I think the research on child prodigies also bears
[00:16:03] on this too, right? Where they don't stay prodigies forever, much of the time.
[00:16:07] Typically not. Yep. Yeah. Okay, great. So we talked a little bit about taking insights or
[00:16:17] skills or learnings from one domain and applying them in another, and transfer in general. Taking that meta a little bit:
[00:16:22] are there domains of research that you think have insights that could be brought to bear on expertise research?
[00:16:31] Sure. So going back to sports: sports is a really nice empirical test bed, for multiple reasons.
[00:16:40] Sports are engaged in around the world by lots of people. So you have a high base rate. It's hard if
[00:16:46] you're looking at a skill that is performed by a hundred people in the world or something like that;
[00:16:52] then what is expertise? It's relative to a small number, but if more people were doing it,
[00:16:57] would those people be experts? You can't tell. Sports have a very high base rate for the most part.
[00:17:04] There are pretty clear indicators of achievement that there aren't, say, for art. And then a lot of people
[00:17:11] are highly motivated to succeed. So that's also useful as opposed to if it's something where
[00:17:17] some people are good at it, but nobody really wants to do it, then you kind of don't know how many
[00:17:21] people would be good at it if they tried. And then, unlike a lot of areas where there might be
[00:17:30] an initial sort of gate to get into it (although I want to come back to that, because
[00:17:37] there certainly are barriers), sports is something where, if you're not the best kid, you can still play.
[00:17:43] And usually you're playing at these lower levels, but if you're doing better, you can then move to
[00:17:49] a higher level. You know, if you're playing the sport at the state level, you could move to the national
[00:17:55] level, and vice versa. There are ways to sort of enter sideways, and that ability to come in continues for quite a long
[00:18:03] time. Now, of course there are barriers. Some sports
[00:18:09] are very expensive to get into. And this idea that playing multiple sports might be particularly
[00:18:15] productive then adds another barrier. So which parents can afford for their kids to play multiple
[00:18:21] sports, especially depending on the sport. So it's certainly not perfect, but it offers a really nice
[00:18:27] empirical test bed for much of what we want to look at in expertise. Yeah. Well, and I think
[00:18:33] the relatively objective measures of success are also why certain games pop up a lot, right? So chess is
[00:18:39] the canonical example, with Elo. I think video games are showing up more frequently for exactly that reason; I mean,
[00:18:45] Elo has been adapted to competitive matchmaking in video games. So that's actually an interesting
[00:18:49] thing I've noticed in sort of my reading. I see research on expertise from
[00:18:55] the psychology side focused on sport and competitive games. If I look at the expertise
[00:19:01] research coming out of philosophy, a lot of it is on moral expertise,
[00:19:06] which is actually, I think, a completely different question: whether you can be a moral expert in the same
[00:19:10] way that you can be a chess expert. But they're more often thinking about cognitive expertises,
[00:19:15] and you have little pockets of people focused on creative expertise. So I think Drake does work
[00:19:21] with children's drawings and accuracy of representation. This is sort of off the beaten path, but do you
[00:19:27] have a sense of whether ease of measurement is a problem for expertise research? That it's too focused on domains
[00:19:35] where success is objective, or relatively objective?
[00:19:39] A hundred percent. It's definitely a difficulty. So I don't want to say that there shouldn't be
[00:19:46] a way to look at these less objective domains. It's tricky, right? So if we want to look at
[00:19:56] something more scientifically, then we need clear, objective outcomes. And so that might be the best starting point
[00:20:04] in terms of where the research goes. There might also be ways to try to quantify performance in some of
[00:20:12] these other domains. Now people can certainly argue whether that's the best way to quantify performance.
[00:20:19] Sure. Sometimes you might be looking at, okay, what do people just agree upon as the best chef or the best art
[00:20:27] or the best writing? There are awards that you can look at in terms of voting. There's some sort of
[00:20:34] consensus, but it is ultimately harder, and there can be more argument about what the best marker for
[00:20:40] achievement is in those sorts of fields. But they still need to be looked at. And so I think,
[00:20:49] as much as possible, the more researchers can come up with ways to quantify achievement objectively,
[00:20:57] the more it can be researched from an objective standpoint.
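For reference, since Elo came up earlier as the canonical objective measure: the standard Elo expected-score and update rules, in a minimal sketch. The K-factor of 32 is one common convention, not anything specific to the research discussed here.

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return both players' updated ratings after a game.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    k controls how fast ratings move; 32 is a common choice.
    """
    expected_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# An upset: a 1400-rated player beats a 1600-rated player.
print(elo_update(1400, 1600, score_a=1.0))  # winner gains roughly 24 points
```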
[00:21:01] Mm-hmm. I wonder sometimes, so with writing, I've been speaking to novelists and short story authors
[00:21:08] most recently, and it is very difficult to disentangle good writing from the publication system,
[00:21:18] right? Because the first pass at objective success would be, oh, which book sold the most copies?
[00:21:24] But that's so far from the quality of the book in a lot of cases. I wonder if that's a critical flaw,
[00:21:34] partly because the incentives of the research system, around getting actual results reliably,
[00:21:40] make people not want to study those things, because it's just so hard to measure.
[00:21:45] It is. It's very hard to measure. And you even see this in sports, to perhaps a lesser degree,
[00:21:53] right? So the incentives are to look at the short-term performance. The incentives are not to,
[00:21:59] well, let's just wait for 10 years and then we'll pay the coach according to that. No, we look at
[00:22:06] what it is right then. Well, and the coach has to be paid all those 10 years too, so.
[00:22:09] Right, right. So yes, I mean, it's sort of understandably a deterrent to expertise researchers
[00:22:17] because you could come up with your way of looking at it and people quite understandably say,
[00:22:25] well, there's all this bias in it. And there is, there's a ton of bias in it. There's bias in
[00:22:32] who can achieve success, in a lot of different ways, some that are fairly universal and some that
[00:22:39] are specific to the field. But that foot in the door is a problem in a lot of places. So will
[00:22:47] it get studied less? Well, probably. So that means it's harder to know what those predictors are
[00:22:55] or the explanations, at least in the way that I tend to conduct research. So it's hard to have that
[00:23:03] included in that general theory that we talked about, at least in the first instantiation of it.
[00:23:10] Sure. Yeah, I don't want to be too negative on expertise research and the prospects for it in
[00:23:15] general, but I do want to ask another sort of related side question and then we'll get back to
[00:23:20] being proponents again. Are there common mistakes you see in expertise research beyond sort of the
[00:23:29] focus on domains that are easier to measure? Mistakes might not be the right word,
[00:23:37] but you do see biases. So you see people who have their pet theory, and
[00:23:45] it can appear that they might be looking for evidence to support that theory and disregarding
[00:23:52] evidence that doesn't support that theory. So for example, I don't know of any sort of outright
[00:24:01] mistakes that, say, Ericsson made, but there were decisions that were made in terms of how work was
[00:24:09] analyzed that one could argue sort of put his finger on the scale. So, you know, I wouldn't call
[00:24:18] those mistakes, but there are decisions made by expertise researchers sometimes to show that their
[00:24:27] theory is the one that works. And this might be amplified if they are, for example, making money
[00:24:34] from their theory. Sure. Yeah. And that's not specific to expertise research. I think that's
[00:24:40] true sort of broadly, as we see maybe too often. Yeah, I think with Ericsson in particular,
[00:24:48] if I remember correctly, one of the issues was that in response to different criticisms, he would
[00:24:55] sort of manipulate the definition of deliberate practice: whether a coach had to be present or
[00:25:00] not, or whether the exercises had to be designed specifically by a coach, things like that.
[00:25:07] Yeah. So not necessarily a mistake, but certainly a move that made the theory harder to evaluate,
[00:25:15] because he sort of vagued it up a little bit in response
[00:25:20] to certain criticisms. Right, right. He kept changing the definition depending on what people would say.
[00:25:27] So it got to the point where if a study showed support, then he would say,
[00:25:34] we studied deliberate practice. But if people looked at the same study and said, well, actually it didn't
[00:25:40] really predict a lot of performance, he'd say, well, then it wasn't deliberate practice.
[00:25:45] Right. So once you start doing that, it does get hard to evaluate. Yeah.
[00:25:51] Yeah. Okay. So, sort of switching gears: whose work, or what work going on right now, are
[00:25:57] you most excited about? So I'm excited by, well, work that I'm doing, but that should be the case,
[00:26:10] right? If you're not excited about your own work, then you probably shouldn't be doing it.
[00:26:14] So one of the things that I'm looking at is how characteristics of the task interact with
[00:26:22] predictors of performance. So for example, um, I alluded to this earlier, I'm looking at
[00:26:30] when you make a task, say static versus dynamic, which cognitive abilities are most important in
[00:26:39] predicting learning and performance. So in that one, for example, I'll back up to say that this is a lab
[00:26:47] study. So it's not real world. We have a computer task in the lab where we manipulate the
[00:26:53] characteristics and then have people play this computerized game repeatedly to
[00:26:59] look at their starting points, their learning rates, and their apogees,
[00:27:03] if they get to that point. And the idea being, if we can see it in the lab, then we might be able to
[00:27:09] translate that into real world tasks that fall more or less on that static to dynamic continuum.
[00:27:16] Because of course, in the real world, there's nothing that is completely static other than
[00:27:21] maybe being an art model, but then you can get anywhere from a little bit of
[00:27:27] change in the environment to quite a bit. So I mentioned before air traffic control and
[00:27:34] simultaneous interpreting, where the information is in constant flux. So looking at that, what we're
[00:27:40] seeing is that reasoning ability seems to be important both for fairly static, turn-taking
[00:27:47] type tasks and for the same task when it's in flux the whole time, but processing speed,
[00:27:54] how quickly you can process information, only matters in that dynamic environment. And so it's a new way
[00:28:01] of looking at tasks and potentially jobs, right? So jobs tend to be categorized, uh, based on their
[00:28:09] domain. So is it computers? Is it transportation? But we don't tend to think about
[00:28:17] tasks falling on these different continua, right? So every task falls somewhere on the
[00:28:24] static-to-dynamic continuum. Every task falls somewhere from highly predictable to not very predictable. And
[00:28:30] every task falls somewhere on how sequential processes are versus how simultaneous they are.
[00:28:36] And so by splitting these up and isolating these characteristics, we can hopefully start to get a
[00:28:44] sense of what these tasks demand. And then hopefully we can get to a point where we can start combining
[00:28:51] them and seeing what the combinations demand that will be more applicable to real world tasks.
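A rough sketch of the continua idea: represent each task as a point along several characteristic dimensions, and let those positions shift the weights on different predictors. The dimension names, scaling, and weight formulas below are all invented for illustration; the one empirical anchor is the finding just described, that processing speed matters mainly as tasks become dynamic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Each characteristic is a position on a 0-1 continuum.
    dynamic: float        # 0 = fully static, 1 = information in constant flux
    unpredictable: float  # 0 = highly predictable, 1 = not very predictable
    simultaneous: float   # 0 = strictly sequential, 1 = fully simultaneous

def predictor_weights(task: Task) -> dict:
    """Toy weighting: reasoning matters everywhere; processing speed
    matters mainly as the task becomes dynamic. The numbers are invented."""
    return {
        "reasoning": 1.0,
        "processing_speed": 1.0 * task.dynamic,
        "practice": 1.0 - 0.5 * task.unpredictable,
    }

turn_taking = Task(dynamic=0.1, unpredictable=0.2, simultaneous=0.1)
air_traffic = Task(dynamic=0.9, unpredictable=0.7, simultaneous=0.8)
print(predictor_weights(turn_taking))
print(predictor_weights(air_traffic))
```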
[00:28:58] Okay. Interesting. Yeah. I can see why that would be fun to pursue. I also wonder,
[00:29:05] so once you have sort of a taxonomy of task characteristics, you can apply it in a bunch
[00:29:10] of different places and you can see which capabilities people have are predictive. And then
[00:29:16] you can also look at different practice and training methods to see what works. So I think of capabilities as
[00:29:24] not task- or domain-specific: cardiovascular endurance, not super task-specific;
[00:29:29] working memory capacity, not super task-specific. And then skills are things that are domain-
[00:29:36] and task-specific. So you can figure out which skills match the sub-characteristics of tasks
[00:29:42] independently of the domain of application. Yeah. Nice. So this is maybe a little late in the
[00:29:51] conversation to have it, but do you have a working definition of expertise?
[00:29:55] Hmm. That's a good question. So typically I'm looking at it in terms of performance differences.
[00:30:05] So you can think about expertise as the whole range of performance, or you can think of an expert as being
[00:30:15] someone who is at the top of that range. Right. So you do need to be careful in how you're using
[00:30:20] that phrase. So when I'm talking about expertise, I try to be careful about whether I am talking
[00:30:28] about that whole range, meaning relative differences across people in their performance
[00:30:34] at that moment. When I talk about an expert, again, I try to say something more along the lines of
[00:30:46] world-class performance, if that's what I mean, or elite performance, and then define it. I actually end
[00:30:53] up not using the word expertise or expert very often, because everyone has a different definition for it.
[00:31:03] So to that end, I don't have a great definition, or if I have one, I try to be very explicit about
[00:31:10] how I am using that term in that case. Gotcha. Yeah. I think I am very on board
[00:31:18] specifically with avoiding expert. I don't like using the word expert because I think it elides
[00:31:22] the fact that expertise is a continuum: you still have expertise even if you're not very
[00:31:27] good at something; you just don't have much of it. At least in the folk meaning,
[00:31:31] the way people use it. I've been talking with people with more or less expertise,
[00:31:36] and it's something, I mean, it's not purely performance-based, because you could have a
[00:31:42] hurricane come through and interrupt your soccer game, and so you perform poorly because the
[00:31:46] wind is whipping; there are external factors that affect performance as well. But it's something,
[00:31:51] I mean, the way I think of it is: expertise is the set of acquired factors that contribute to
[00:31:58] reliably superior performance, and that "reliably superior" can get bigger or smaller as you get more expertise
[00:32:03] and as the quality of the factors you've acquired changes. I think, yeah, I don't know. I think one of my
[00:32:11] problems reading expertise research early on was studies of novice-expert differences where the
[00:32:16] experts had an hour and a half of practice on the computer game that they were playing in the
[00:32:21] lab, and that made them experts compared to the people who did it for the first time that afternoon.
[00:32:25] Um, so yeah, great. Okay. Uh, well again, thank you so much. I've really enjoyed this conversation
[00:32:31] and I think people will get a good bit out of it. So thank you for your time.
[00:32:35] You're welcome. Thank you. Thanks again to Dr. Brooke Macnamara. And thank you for listening.
[00:32:43] We'll be back soon with another episode of Ten Thousand.
