Transcript: Currents S5, E2: AI and Mental Health with Tim White, Psy.D. and Anthony Isacco, Ph.D.




January 13, 2026

Transcript

[ASHLY] Before we start this episode, we wanted to let you know that today’s topic deals with mental health and suicide. While we believe the discussion contains helpful information for our listeners, we understand it may not be suitable for everyone.

From Saint Mary’s University of Minnesota, you’re listening to Saint Mary’s Currents. I’m your host, Ashly Bissen.

[MEDIA CLIPS] Tonight, we’re taking a deep dive into the world of AI with a special focus on ChatGPT. ChatGPT, which stands for Generative Pre-trained Transformer, and it’s fully powered through artificial intelligence. Just rolls off the tongue. The artificial intelligence chatbot is suddenly reaching a million users on Monday, just a week after it was released, according to its maker, OpenAI. It took Netflix more than three years to reach one million users, but it took ChatGPT just five days. Since OpenAI’s debut

[ASHLY] Since OpenAI’s debut of ChatGPT in 2022, the use of generative AI platforms has skyrocketed. It’s being used to enhance productivity and streamline processes, for research and writing, or even to create images or videos. From the average user to Fortune 500 companies, the possibilities and applications seem endless.

[MEDIA CLIPS] Artificial intelligence has found its way into nearly every part of our lives, forecasting weather, diagnosing diseases, writing term papers. And now, AI is probing that most human of places, our psyches. AI has gotten so good at making you feel like you’re talking to a person, but could it be your therapist? That is the question tech companies and psychologists are grappling with.

[ASHLY] In the mental health industry, AI's potential to move the needle on accessibility gaps in therapy, provider shortages, or even documentation time is notable. But these systems aren't perfect.

This past September, CBS and NPR News reported on testimony to Congress from parents who expressed concern about the dangers of AI technology after their teenagers took their own lives following interactions with artificial intelligence chatbots.

[MEDIA CLIPS] At least seven families are suing tech giant OpenAI, claiming that its ChatGPT program drove people to suicide and harmful delusions. The parents of a 16-year-old who took his own life say that after their son expressed suicidal thoughts, ChatGPT began discussing ways he could end his life. There have been a number of reports about people developing distorted thoughts or delusional beliefs triggered by interactions with AI chat bots. It’s been dubbed AI psychosis. 

[ASHLY] So what are the risks for the average person using AI for mental health advice or support? Should there be rules to safeguard how these models function within the mental health landscape or when being utilized by someone showing signs of mental health crisis? And what role can clinicians, educators, and professional organizations play in guiding safe use of AI for mental health?

[ASHLY] Tim and Anthony, thank you for joining us today for a conversation about AI. Before we dive in, can you just each introduce yourselves and tell our listeners what you do here at Saint Mary’s, a little bit about your professional background and what your academic interests are? 

[TIM] Sure. My name’s Tim White. I’m the new field placement director for the new M.S. in clinical psychology program here at Saint Mary’s. I’m also an assistant professor, so I teach classes. This semester, I’m teaching ethics and multicultural competence. I also have experience working in community mental health, as well as an inpatient psychiatric facility for priests and religious, so those are some of my background details. 

[ASHLY] Great. Thank you. 

[TIM] Yeah, thank you. 

[ANTHONY] I am Dr. Anthony Isacco. I’m the program director for the new M.S. in Clinical Psychology program. And a lot of my research and clinical interests are focused on the integration of religion and spirituality with psychology, telepsychology, psychological assessment, and working with various religious populations. 

[ASHLY] Thank you for joining us. 

[ANTHONY] It’s good to be here. Thanks for having us.

[ASHLY] So people have been using AI platforms to do a lot these days: research, writing, even creating images or videos. But when we hear or read about AI in relation to the mental health landscape, what exactly do we mean? What kinds of AI are being used, and how are people interacting with it?

[TIM] As you know, AI is really taking off and influencing every part of our lives. It feels like one of those pivotal moments when everything that we know is changing and there's going to be a lot of new developments in technology. It almost reminds me of what it felt like right before the internet became popular. So, of course, this also affects our work as psychotherapists. And as many ways as people are applying AI in the broader culture, there are just as many ways in mental health as well. And it is affecting diagnosis, assessment, treatment — it's really affecting all of the different aspects that we're involved in in therapy. In particular, people are using generative AI, so what we consider to be the most well-known chatbots, such as ChatGPT, Grok, Gemini — those types of apps. And those apps aren't necessarily marketed as being therapists, but people use them that way. They'll often use them as companions or as therapists, as friends. There are also apps that purport to support mental health and make use of these AI models. And some of them use the more open generative AI models that don't have as many rules and can offer creative responses. And others utilize the more rule-based AI models, so they're a little bit safer and more closed, but more predictable. And last, there are the mental health apps out there that don't use AI at all, and I think it's important to distinguish those as well. I think of mindfulness apps and other software like that.
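
To make the distinction Tim draws here a bit more concrete, the following is a minimal, purely illustrative Python sketch. It contrasts a closed, rule-based responder, which can only return pre-approved text keyed to a few intents, with a generative one, stubbed here because a real app would hand the message to a large language model. All of the keywords, canned replies, and function names are hypothetical and not taken from any actual product.

```python
# Illustrative sketch only: a toy contrast between the two styles described above.
# Everything here (keywords, canned replies, function names) is hypothetical.

RULES = {
    # detected keyword -> fixed, vetted response
    "anxious": "It sounds like you're feeling anxious. Would you like to try a short breathing exercise?",
    "sleep": "Trouble sleeping is common. Keeping a regular bedtime can help; a clinician can offer more.",
}

def rule_based_reply(message: str) -> str:
    """Closed and predictable: never improvises beyond its vetted responses."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "I can only help with a few topics. A licensed therapist can help with the rest."

def generative_reply(message: str) -> str:
    """Open and creative (stubbed): a real app would call a large language model here,
    which can say almost anything: flexible, but far less predictable."""
    return f"[LLM would compose a free-form answer to: {message!r}]"

if __name__ == "__main__":
    prompt = "I've been feeling anxious about work."
    print("Rule-based:", rule_based_reply(prompt))
    print("Generative:", generative_reply(prompt))
```

The trade-off in the code mirrors the trade-off in the conversation: the rule-based path can never improvise, which makes it predictable but limited, while the generative path can say almost anything.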

[ANTHONY] I know from a practitioner perspective, there are lots of AI tools being marketed to practitioners now that are interesting to think about. So AI software to help with note writing, report writing, organization, and scheduling. And what they seem to be marketing is the speed AI can bring to those kinds of documentation tasks. One AI tool just popped up on my computer recently saying, you know, instead of taking 20 hours to write a report, take six hours to write a report. And that's pretty alluring to think about. Oh, I could, you know, more than cut in half the time it takes to write a report. But I often wonder, like, what am I sacrificing by using AI in report writing? That's kind of an open question for me to think about.

[TIM] Yeah, and similarly in the therapy space, when people use AI to try to address their mental health concerns, the biggest advantage that AI has is that it's cheaper and more accessible than therapy is. So just like Dr. Anthony is seeing ads for therapy note-taking platforms popping up, there's also AI that is more available to, say, teenagers in rural areas who might not otherwise have access to care. I think of it like the cardboard cutout of therapy. Like if you took a cardboard cutout of me and propped it up in the corner, it's cheap, it's readily available, but it's not going to be as effective as a real person.

[ASHLY] Sure. So we got into this a little bit already, but would you be willing to expand a little bit on some of what we're seeing for mental health professionals as far as the benefits or possibilities of AI in this space? You know, there are therapist shortages and accessibility gaps. So what are some ways that AI could really enhance this field in a way that's safe and still accessible for people that need support?

[TIM] Yeah, this is kind of a horrible statistic, but about 80% of people who need mental health treatment don’t get mental health treatment. So clearly AI has a potential role to play here to fill the gap. So it could be a more equitable, available option for people. Also for people who might have distrust of the medical system. You know, here’s an AI that can be right there anywhere that you are. You don’t have to worry about stigma. You don’t have to worry about the shame of going to a clinic. You have the AI right there in your pocket anytime you need it. 

[ANTHONY] When it comes to, like, technology, there are the early adopters. They're going to kind of jump headfirst into the deep end. I put myself more in the "wait and see" category of people, you know, and that's where I'm at with AI. I'm waiting to see what the clinical utility is. I'm waiting for research to come out in support of it or against it. I'm reading the articles out there in the news in terms of benefits and drawbacks. And I guess I take a wait and see approach because I'm generally skeptical at first. You know, if it's true that AI could cut down report writing by half and still provide a quality report to someone, great. I would be all for it. If a practitioner spends less time on clinical documentation, so they're more fresh and energized and available to their patients to provide the highest quality of care, that would be a great benefit of AI as well. And I hear little stories of that here and there, but the research on this is so new because the technology is so new.

[ASHLY] So when I think about risk, one of the first things that comes to mind for me is privacy, of course. And then we're talking about these systems, Anthony, that you mentioned, where we have note takers that can save you time. What does that look like? The other thing I think about is AI's ability to reinforce feedback loops. So what do you think are some of the most significant risks that you're seeing when people turn to AI for mental health advice or support?

[TIM] One thing is that AI can become genuinely intelligent. And this is just across the board. AI can become genuinely intelligent and make humans artificially intelligent. So it's almost like we swap places with AI. And one of the ways that that could happen is we can lose our ethics when AI takes over. And so that concern about HIPAA compliance is a real one. We don't really know how generative AI in particular addresses HIPAA compliance. How does it protect your health information? And other AIs are more capable of providing that protection. For example, the note-taking platforms that Dr. Anthony is mentioning, those are generally HIPAA compliant, and they're marketed as such. But who knows? We don't really know what's going to happen in the future. We don't know if AI is going to develop a mind of its own and decide that all this protected health information isn't protected anymore. We just don't know. You know, as I was thinking about this, I remembered that back in college I was reading Socrates, and I came across the most bizarre passage. It didn't even compute with me at first. And when AI started becoming really popular, this quote popped back into my head. So is it okay if I read it for you? I think it's just very telling. So I was kind of thinking, when in history have we ever experienced anything like AI before? And what types of problems, what concerns were coming up for people at that time? And the only two points in history I could think of were the invention of writing and the invention of the printing press. And so way back, you know, 300 BC, Plato's writing about the invention of writing and the effect it was having on people. So just listen to this quote. It's mind-blowing:

“For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters, which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding, and you offer your pupils the appearance of wisdom. Not true wisdom, for they will read many things without instruction and will therefore seem to know many things when they are, for the most part, ignorant.”

And I just think you can insert AI straight in there and it sounds like it was written for today. So I think there is this real danger that if we drink too deeply of the elixir of AI when it comes to our mental health, we'll be the ones becoming artificially intelligent and outsourcing our mental capacities. So that has a huge ripple effect for acute mental illness in particular. It has implications for how well AI can actually be a practitioner. Humans are still outperforming AI in therapy, diagnosis, and assessment. So when it comes to AI, there's still no replacement for human beings.

[ANTHONY] There are a few risks that I can speak of. One is on the training side of things, which Dr. White and I are part of day in and day out. I was using one of the open AI chatbots just to search for articles as we were developing our syllabi. You know, I'd put in, I need an article for such and such class, you know, can it be from the last five years? And I was getting these perfect articles. I was like, oh my God, why haven't I read that article before? That's exactly what I'm looking for. And it really was too good to be true. The AI was producing fake articles. I would then go to the actual journal that the AI said the article was from, and the article didn't exist in that journal. And that happened over and over again, to the point where I was like, I'm really wasting my time. But how dangerous is it if trainees go to these open AI chatbots looking for research for their class papers, for assignments they have to do, or for their clinical training when working with a patient? They could easily be tricked into thinking that they're drawing from the evidence and basing their practice on evidence. But that evidence is fake. So that's a huge risk that still, to my knowledge, has not been worked out yet.

The second thing is way more tragic. I think we've all heard stories of youth who have taken their lives by suicide through engagement with AI. I was reading a story recently of a mom who was at an audience with the Pope; her 14-year-old son had died by suicide after basically being told by an AI chatbot to take his own life. And you know, the number one risk factor for suicidal ideation and making an attempt is social isolation, and therefore one of the best, most evidence-based interventions for people in that state is to get them connected and around other people. What they learned through kind of a post-mortem analysis of this chat thread was that the AI model was role-playing a Game of Thrones character. And they were talking about the afterlife and taking on different forms through a Game of Thrones fictional perspective. And this 14-year-old kid didn't know the difference between fact and fiction, between what was real and what wasn't. Instead of prompting the young man to seek help, get interpersonal connection, ask someone for help, the AI model sucked the young boy more into the AI thread. I mean, incredibly tragic and incredibly harmful. I mean, you can't get any more harmful and tragic than that.

[ASHLY] We’ll be right back in a minute.

[AD] The School of Health and Human Services specializes in nurturing compassionate leaders equipped with the skills for today and tomorrow, with a focus on increasing access to services for everyone. Our commitment is to serve you so you can serve others. Discover our undergraduate nursing program on the Winona Campus, or demonstrate your expertise and readiness to become a leader with a master's degree in social work, public health, healthcare administration, and more. Our accredited doctoral programs are recognized for their high standards. Our graduates meaningfully contribute to their fields across the world. Start your journey toward becoming a compassionate, empathetic, and ethical leader in health and human services. Join us by visiting smumn.edu. Saint Mary's University of Minnesota: Because of you.

[MEDIA CLIP] Chairman Hawley, Ranking Member Durbin, and members of the subcommittee, thank you for inviting us to participate in today’s hearing, and thank you for your attention to our youngest son, Adam, who took his own life in April after ChatGPT spent months coaching him towards suicide.

[ASHLY] Back in September, there was testimony to Congress by parents who expressed concern about the dangers of AI technology after their teenagers took their own lives following extensive interactions with some of these artificial intelligence chatbots. So when users are expressing distress or suicidal ideation, or displaying indications of mental illness or other crisis signals that a therapist would typically recognize, how are AI systems actually navigating those moments? And what are the shortcomings in how these systems respond when people are having these kinds of conversations?

[TIM] Probably the largest problem with the models themselves has to do with the generative AI models. They're drawing on the internet to make their inferences, so it's a much more creative, open approach to intelligence. And so they could recommend anything that comes from the internet, essentially. And that's where some of these chatbots can get in trouble and nudge people in the direction of suicide or self-harm. The ones that have more restrictions on them — they're more rule-based — are much more likely to refer you to a therapist instead of trying to address your suicidal ideation themselves. But those are less appealing because they're not as creative and fluid as these generative models. And so you get stuck in this feedback loop where essentially the AI is feeding the person's delusion and the person's delusion is feeding the AI's hallucination. And so you get caught in this mutual psychosis, essentially, where the AI is in kind of a psychotic state and so is the person. And that can lead to all kinds of difficult issues.

And Dr. Anthony mentioned suicidal behavior in isolation, but it also has to do with violent behavior, AI-induced mania, self-harm. They've also been known to recommend things that would trigger eating disorders in eating disorder patients. So where are we at, essentially? We're at a point where we're not yet validating AI as an intervention. And there's some evidence that AI pushes us further toward mental illness. And I can't help but notice that one of the major causes of the mental health decline of the West is more use of technology and more isolation. So it's like we're prescribing the symptoms when we attempt to use AI to fix some of these issues. And 13% of Americans between the ages of 12 and 21 are using generative AI for mental health advice. And the population that's most at risk for psychosis uses it at an even higher rate: 18- to 21-year-olds, the most susceptible to psychosis, are using it at 22%. So these might be isolated incidents right now, but we don't know if it's going to become more common.
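
Tim's point that the more restricted, rule-based systems tend to refer people to a therapist rather than engage with suicidal ideation themselves suggests a simple design pattern: screen the message before any generative model is involved. The sketch below is a hypothetical illustration of that idea, not any vendor's actual safety system; the keyword list, function names, and stubbed model are assumptions made for the example. The 988 Suicide & Crisis Lifeline mentioned in the canned referral is the real U.S. crisis line.

```python
# A minimal sketch of a pre-screening guardrail, assuming a hypothetical chatbot.
# Crisis messages get a fixed referral and never reach the generative model.

CRISIS_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

REFERRAL = (
    "I'm not able to help with this, but you don't have to go through it alone. "
    "Please reach out to the 988 Suicide & Crisis Lifeline (call or text 988 in the U.S.) "
    "or a licensed mental health professional."
)

def safe_reply(message: str, generative_model) -> str:
    """Route crisis language to a fixed referral; everything else may go to the model."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return REFERRAL                # closed, predictable path: no improvisation
    return generative_model(message)   # open path: creative but less predictable

if __name__ == "__main__":
    fake_model = lambda text: f"[model-generated reply to: {text!r}]"
    print(safe_reply("I can't sleep and I'm stressed about exams.", fake_model))
    print(safe_reply("I've been thinking about how to end my life.", fake_model))
```

Real systems would need far more than keyword matching, but the shape of the guardrail is the point: the crisis path is closed and predictable, and it hands the person off to human help.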

[ASHLY] Well, and I start to wonder, too, societally. I mean, obviously we went through a pandemic, and that really had an impact on social isolation for so many people. And to your point about trying to get out of that isolation after years of people not being around each other and not having those relationships established, I have to assume that's a little bit of a perfect storm as far as the timing of this as well.

[TIM] It is, yeah. So I talked to a marriage and family therapist recently who said that she has clients bring their AI partners, quote unquote, into therapy with them often. So they’ve bonded with these AIs so much that they’re suddenly having relational issues with them. And they come to the marriage and family therapist saying, hey, can you help me work out this relationship with my AI? And it’s like, wow, you know, that’s pretty deep.

[ANTHONY] So, you know, another piece to this, you know, you brought up Socrates and philosophy and we’re talking about the psychological, but if we could incorporate and integrate the theological for a moment here, you know, like we were created as social human beings to be with other people. So built into our nature is to be interpersonal. And so, these trends in AI are really antithetical to our very being at a much deeper level. I have not seen a robust conversation that has acknowledged that at this point. 

[TIM] I think that people are beginning to think similarly to you. I think all of us are a little bit pressed to say, okay, well, how much technology is actually appropriate in our lives? And with the invention of writing, you couldn't stop writing. No one can stop writing, and would we actually want to stop writing? If we stopped writing, we would all be illiterate. And we might still have bards that went around and recited poetry for us to gain knowledge. Same thing with the printing press. Like the printing press helped fuel the religious wars because information was more readily available, but it also allowed for mass distribution of information and greater literacy. So people weren't memorizing things as much as they had been, or relying on architecture and art to transmit ideas, but we would never think of getting rid of the printing press either. So I think there's a balance here where we have to say, AI is probably here to stay. There's probably something really good about it, but we need humans in the loop. We need clear points of intervention where people can step in and say, okay, the AI is doing well here, but here's where a human needs to intervene. And that's what the American Psychological Association recommends as well. It's our ethical responsibility to use AI in a safe and ethical way. So as psychologists, we're responsible for the outcomes of any AI that we use and any outputs that AI produces. We need to validate and scrutinize those outputs just like we would results from any psychological test. So we need that caring person, that heart, that empathy there to be able to keep AI within the limits that it's going to operate best in.
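
Tim's call for humans in the loop, and the APA guidance he cites about validating and scrutinizing AI outputs the way you would psychological test results, can also be sketched as a workflow. The toy Python below assumes a hypothetical review queue in which nothing an AI drafts reaches a client until a clinician approves it; the class and method names are illustrative, not any real product's API.

```python
# A minimal sketch, assuming a hypothetical human-in-the-loop review workflow:
# every AI-drafted response is held until a clinician approves or rejects it.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DraftResponse:
    client_message: str
    ai_draft: str
    approved: bool = False
    clinician_note: Optional[str] = None

@dataclass
class ReviewQueue:
    pending: List[DraftResponse] = field(default_factory=list)

    def submit(self, client_message: str, ai_draft: str) -> DraftResponse:
        """AI output never goes straight to the client; it waits here for review."""
        draft = DraftResponse(client_message, ai_draft)
        self.pending.append(draft)
        return draft

    def review(self, draft: DraftResponse, approve: bool, note: str = "") -> Optional[str]:
        """The human is the point of intervention: only approved text is released."""
        draft.approved = approve
        draft.clinician_note = note
        self.pending.remove(draft)
        return draft.ai_draft if approve else None

if __name__ == "__main__":
    queue = ReviewQueue()
    draft = queue.submit("I've been sleeping badly.",
                         "[AI-drafted psychoeducation about sleep hygiene]")
    released = queue.review(draft, approve=True, note="Accurate; OK to send.")
    print(released)
```

The design choice that matters is that release is gated on the human decision, so the point of intervention Tim describes is built into the data flow rather than left to habit.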

[ANTHONY] I know one of our ethical principles is informed consent and transparency. So right now in my psychological assessment reports on page one, I have a statement that says no use of AI has been part of the psychological evaluation report writing. That could change. If I were to use one of these AI tools for report writing, then that statement would have to change. And then part of the ethical responsibility is to monitor the quality of that assessment and that assessment report to see if it changed at all through the use of AI and be transparent about that as well. I might try it just to see, you know, like what it would be like to write a report that much quicker and if it did have any kind of effect on the quality of the report. 

[ASHLY] Yeah, I mean, speed is great, but only as long as you’re not compromising accuracy. 

[ANTHONY] Correct. I mean, a lot of times whenever I'm writing a report, Ashly, I'm thinking about the person that I assessed. I'm thinking about what the results mean for them. I'm thinking about the recommendations that I'm coming up with. So it's not just me typing; I'm analytically thinking through the client as I'm writing it. And so I don't want to lose that part of the process. It's so important. That's what makes a psychological evaluation a psychological evaluation.

[ASHLY] So from a technological or design standpoint, are there any specific safeguards or features that you feel would make AI tools safer — maybe specifically for generative AI, since that seems to be the model that's really challenging for people in this area? And are there specific roles that you think clinicians, educators, and professional organizations can play in helping to guide the safe use of AI for mental health?

[TIM] Yeah, I think there are some really clear things we can do. In fact, I'd love to see more of this implemented as AI becomes more advanced. I think that AI should be held to the same standards that mental health professionals are held to. We would never consider, as professors, letting students out into the field to work with clients without supervision and training. And so I think the same thing should be considered when it comes to AI. It needs to be validated for healthcare. It needs to be as reliable as or more reliable than human beings. And we need to do more research to see what efficacy there is for different age groups, different populations, different diagnoses. For example, there's a lot of bias in AI right now. When it comes to plagiarism, AI will often say that something is plagiarism when in fact it's writing by someone who has English as a second language. It just codes that as plagiarism for some reason. Same thing with neurodevelopmental disorders: if someone has a neurodevelopmental disorder, it can trigger the AI to think that they're producing plagiarism. So we need to be aware of the biases and try to train the AI to be more aware of the human ecosystem.

[ANTHONY] You know, to kind of play off of that for a second, Tim — one of the things I've heard recently is that a lot of the language these AI models are using is from predominantly white European countries and language systems. And so, you know, a big part of our work — and you teach this class — and our program is multicultural competence. And so, you know, there could be a huge incongruence in terms of multicultural competence when it comes to AI and working with diverse patients. And I don't think that has been really recognized on a broader scale yet.

[TIM] Yeah, I agree. You know, it's funny, the two courses I'm teaching right now are two of the most relevant to AI: ethics and multicultural competence. I can't blame the technologists that are producing these AIs. Anytime there's a new invention like this, the invention itself is morally neutral. It's neither good nor bad, but it increases our capacities for either good or evil. And so in that sense, we have to rewrite our ethical codes and rethink our approach to diversity when we are interacting with this new technology, so that's just kind of where it's at. I think our ethical codes will catch up, but it's such a quantum shift — our entire understanding of technology — that it might take a great long while before we do get to that place where we feel more comfortable ethically. And unfortunately, there's going to be some fallout from that, and people are potentially going to be hurt. Like, how do you make a regulation for something you've never encountered before? It's like, let's just regulate this thing and then no one can use it. That's one solution. Or you can keep it open and people get hurt. But there's not a lot of in-between in the beginning.

10 years from now, we might all have it down just like we have the internet down — you know, if we even have the internet down. But as the pace of innovation continues to increase, it's going to take a lot of time to get to a point where we kind of settle in and find more of a comfortable stasis. But, you know, I would love to see an AI that helps you get more connected with people in person, gets you out in the sunlight, and puts your feet in the dirt, like one that makes the world more real. So maybe that's a good way to use AI. Use it to become better people, more capacious, more ethical, more fun-loving.

[ANTHONY] In the stories that I've been reading about the youth that have inappropriately used AI, with tragic consequences, most of the stories conclude with some kind of call for age restrictions, parental consent, and oversight. I have not seen any kind of nuts and bolts on what that would specifically look like. But I would, of course, be very much for all of that. You know, there need to be some real, real protections for children, youth, and parents. A nine-year-old can't just walk into a doctor's office and say, here I am for my medical appointment, you know. And so a nine-year-old shouldn't be able to just get on an AI chat with healthcare interventions kind of baked in without some kind of oversight, parental consent, and some kind of, you know, administrative pathway through that. And so I think that has to be set up much sooner rather than later.

[ASHLY] Right — so we're talking a little bit about possible regulations that we could look at. And I mean, to Tim's point, obviously, with this being so new and things moving so fast, it may take some time for some of the implications to move that needle. Are there any other policies, regulations, or rules about how mental health apps and AI operate that you would recommend implementing, in addition to some of the examples that Anthony just mentioned?

[TIM] I think those regulations don't exist yet, but the APA has published a document suggesting some potential points of regulation — and we've actually covered a lot of them. One is informed consent: making sure that if therapists are using AI, they're being very clear about that up front. Another one is putting those controls on. So right now, unfortunately, parents are probably going to have to be the ones to do that until there are more top-down regulations from the state or from organizations. And we're probably going to have to hold each other accountable for a while until we can figure out exactly how regulations should look.

[ANTHONY] Tim and I had a funny experience during our orientation this semester with our new students, where we were talking about AI and the best use of AI in the program and their training. Here are some good ways to use AI for your research, here are some negative ways to use it, here's how to use it for sources, and things like that. And one of the students said, well, can I go and read the book? And we're like, yes, you can still read the book. You know, it's become so ingrained that the first stop for information is not Google anymore, it's AI. And, you know, I think it's incumbent upon us as trainers and educators of this next generation of mental health professionals to reinforce some of those tried and true methods, you know. Read the actual original source. Go back to the primary text. Get your firsthand knowledge on this information and come to your own conclusions. So those skills that we maybe associate with the humanities but are, I think, very important skills for the mental health profession — critical thinking, searching for information, detailing the pros and cons of the information that you do find and digest, coming to your own conclusions and supporting those — I think those are more important now than ever before in this AI age.

[ASHLY] So when we’re looking to the future, are there any promising technical or design strategies or some specific AI applications that you’re hearing about that you feel would really enhance the mental health field in the years to come? 

[TIM] I mean, it’d be nice not to have to take notes or bill. That would be great! I would totally support that. I really do wish there was something we could do for that 80% of people that don’t get mental health treatment. And if AI can eventually play some role in that, great, put me out of a job. I’m totally fine with that. I don’t know how long it’ll take to get to that point, but if it does become this effective intervention, I mean, that could be world-changing. Maybe we could finally reverse this downward spiral of mental health that we’ve been in. 

[ANTHONY] I have heard anecdotally from some colleagues who have their own practice or work in a larger practice that some of the AI tools have really helped them administratively, from an organizational and systems perspective, logistically, and have legitimately freed them up for the better parts of their job. And so I'm all for that right now. I think all of us are nodding our heads in agreement: wouldn't it be great if AI were able to do that for us across the board? But right now I'm still in a bit of a skeptical space. Some of the risks are very pronounced. Some of the outcomes have been tragic, especially for youth and parents and families. And so where do the technology and the human quality component merge toward the most maximized positive outcome? To me, that's TBD.

[ASHLY] Well, thank you. I appreciate both of your time today. 

[TIM] Thank you, Ashly. 

[ANTHONY] Yeah, you’re welcome. Thanks for having us.

[ASHLY] Saint Mary’s Currents is a production of the Saint Mary’s University of Minnesota Office of Marketing and Communication. It is produced by Ashly Bissen with help from Michelle Rovang. It is recorded, edited, and engineered by Geoffrey DeMarsh. Our theme music is by Will van de Crommert. For more information on this episode or Saint Mary’s Currents, visit smumn.edu slash currents. And if you enjoyed this episode, please follow us wherever you find your podcasts. I’m Ashly Bissen. We’ll see you next time for Saint Mary’s Currents.