Episode 99: AI-doption trends in corporate immigration
This episode of the BAL Immigration Report is brought to you by BAL, the corporate immigration law firm that powers human achievement through immigration expertise, people-centered client services and innovative technology. Learn more at BAL.com.
In this week’s episode, BAL’s Edward Rios and Christopher Barnett discuss AI implementation and adoption in the global mobility arena. Plus, the latest immigration news.
From Dallas, Texas, I’m Rebecca Sanabria.
The impact of artificial intelligence in our personal and professional lives grows exponentially day by day.
According to a recent Forbes study, 64% of businesses expect AI to increase productivity and 42% expect it to streamline job processes.
As AI technology continues to revolutionize various industries, it is transforming the way companies operate and significantly impacting how human resources and mobility professionals recruit and work with foreign national employees and manage in-house immigration programs.
BAL partner Edward Rios from the Boston office and senior associate Christopher Barnett from the Dallas office joined the BAL Immigration Report prior to the presidential election to discuss AI adoption trends in corporate immigration.
Rios: Chris, we are talking about AI and its implementation and adoption within global mobility, and there is so much to unpack here.
We’re in November, so I will try to refrain from any references to pumpkin spice or anything else that’s happening in New England. It is the holiday season, but what comes at the end of the year is something I find very interesting: quarterly reviews leading into annual business reviews. I’d like to start us, if we can, with that angle.
Looking at AI, after having had several QBRs lately and talking about not only what teams are doing today but what they’re planning to do — with AI coming into the mix — I thought this would be a great topic and opportunity for us.
Let’s see what we can deliver for folks looking at it from a practical level — what’s happening with the practical implementation of AI. I think that always leads into a policy discussion at some point, whether it’s policy within companies or policies between countries. What we focus on here at BAL is how world-leading companies can employ top AI professionals.
Quick note: About a year ago at WERC I was on a panel talking about the implementation of AI and which way it would go. I know Kelli Duehning from our San Francisco office and John Hamill in New York just spoke at this year’s WERC on AI as well. My takeaway is that it’s heading in the direction we thought: AI agents — not simply blind use of these technologies, but very focused use of their analytical and summarization capabilities — are coming to the foreground. That’s going to be the usage that I think is the most important as we go forward, so I’m glad to see that that trend continued as we thought it would.
We’re really at the very beginning. I think we’re at the moment when Edison invented the light bulb, or soon thereafter. I heard this somewhere, so I don’t want to take full credit for it, but we have no idea what the usage of AI is going to be. We’re just at that nascent spot, and I think that means that today most of our discussion will be on what we’re seeing people try to use it for, what the experimental use of it is and what direction that’s going to take us into the policy discussions.
We did a survey — shout out to Victoria Ma here in Boston, who helped craft a fantastic survey. We have the results of that survey, and I know Chris, you’ve been looking at it. Tell me what your thoughts were, what jumped out at you when you looked at the survey. We’ll take it in that direction.
Barnett: Thanks, Edward. I’m excited to be talking about this with you today. You had posted about the survey, and I had a lot of really interesting thoughts about some of the content that came out of that survey. The thing that jumps out the most to me is that nearly 75% of those companies polled are using AI in immigration programs in some way. That’s a pretty high number.
As you talked about, we’re kind of at that nascent point now where people are using it. Maybe they’re all doing it a little bit differently, but they’re finding ways to try to use it, even in those more traditional programs like immigration, where you wouldn’t expect that as much. What did stick out to me, though, was some of the concerns over the use of AI in these immigration programs.
The top three concerns were accuracy, data privacy and security, and immigration compliance — or, “Is the law being applied correctly?” You posted about this, Edward, but I wanted to get your thoughts a little more on those top concerns and what you thought about that.
Rios: I think with any new technology, you’re going to hear most of these same concerns, right? Does it work the way we intended? What’s going to happen with this data and information that’s flying around? And then, is it going to comply wherever it’s being applied? To me, the privacy framework is an interesting one because I remember — I don’t want to date myself here — but any of us who’ve been practicing for a couple of decades remembers the expanding use of email, especially on devices: pre-iPhone practice, post-iPhone practice, BlackBerrys, all that.
I don’t see it as too dissimilar from the concerns about email and privacy. While that’s still a concern, we have lots of systems in place to protect the security of email transmissions. I think AI is going to be the same thing. It’s not unfamiliar. We’re going to put the framework around it to protect privacy and information. I see that as a concern, and I see it as an old refrain — it’s a concern that comes up with all new technologies. I see that we’ll solve that one.
So, I’m less concerned about privacy if people use it properly — just like email, nothing new there. Accuracy — nobody likes AIs hallucinating and coming up with gibberish; that’s something that’s new for us. But I think professionals who know what they’re doing — who can read the content we’re feeding these systems, understand what the outputs are going to be and check them — will make sure that the decisions and outputs the AI systems give us are sufficient.
I’ll give you an example. I was playing with AI recently — specifically, NotebookLM, a great test product you’re seeing come out from Google. I fed it a bunch of notes from folks who will be doing a panel here soon within the firm, and it summarized those notes amazingly well and created a 10-minute podcast going through all of the information we shared. Most of it was accurate, but not all of it.
We were able to tweak those notes, re-feed them into the AI and make sure that the output was what we wanted it to be. I think accuracy is going to be a human effort. We’re going to have to make sure that those systems are doing what we intend them to do.
Barnett: Great points. I think you also touched on this a little bit, but at this point, the cost is really unknown, right? It’s not being monetized really in any given way. This is definitely a future area to watch, especially as generative AI takes over.
I just saw recently that OpenAI had raised another $6 billion in funding, but they’re not expected to be profitable for another five years. People are still figuring out exactly how to use this, how to monetize this. But we’re seeing some really interesting use cases in how AI is being deployed among different companies, changing some of these different industries.
I’ve attended a few conferences recently and saw some ways that these are being deployed. There was one where AI is being used as a safety management tool in construction programs, reporting on worksite incidents and generating compliance summaries.
I saw another example where hospital charts and clinical notes are being summarized succinctly using AI, saving the hospital staff time and effort in summarizing those things. And then, just recently, OpenAI launched a ChatGPT web search. That sounds great.
I do like the idea of AI-summarized search results, but there’s something to be said for searching through results yourself to find the right content that applies, instead of relying on AI to interpret that search — and to interpret it correctly. I think you touched on that a bit with accuracy. And that brings up the question: you still need professionals to apply the search results correctly, to apply the law correctly.
I’m working on a case myself now that involves cybersecurity training and identifying vulnerabilities. They’re using AI to do that, but in an area as important as cybersecurity, it still requires a lot of manual review. That brings up another question I’ve been thinking about: How are companies looking to retain top talent, or bring top talent to the United States, if they are not originally from the U.S.? Edward, what are you seeing in terms of AI talent in the U.S. versus the rest of the world?
Rios: I couldn’t agree more with your summary there. The human in the loop is going to be there for a while — and I think it might be shorter than we think — but, for the time being, check back in with us next year when we go through this again. But that necessity to ensure the accuracy is there, and the application is there, is important and I agree.
As far as talent — talking about the humans in the loop, the people making all of this happen — the U.S. is still the top destination for the world’s most elite AI talent, but I can tell you just from the data I’m seeing that it’s shrinking. Folks are going to other places. There was a great study recently from MacroPolo — we’ll put the link in the show notes.
It’s a fascinating graphic showing the origins of folks — where they come from, where they pursue their education in the United States, undergrad or graduate, and where they flow to pursue their professional efforts. The U.S. has 60% of the top AI institutions, and it’s remained a leading destination for that talent, but it is shrinking. I think you saw that MacroPolo graphic as well.
Barnett: They covered everything from institutions to the flow of talent to and from the U.S. Really interesting — it also highlighted that in 2019, 18 of the top 25 institutions for AI were in the United States.
In 2022, that was down to 15. The U.S. still has 15 of the top 25 institutions for AI, but the rest of the world is catching up, with a lot of other great institutions, great programs and great innovation elsewhere. China, Europe, India, Canada, the U.K. — these are all great places where we see a lot of that kind of innovation coming from.
It’s at all levels too. We’re seeing this in Ph.D. researchers — of the people who completed their Ph.D. programs in 2022, 77% stayed in the United States; in 2019, it was 86%. We’re seeing some people leave the U.S. in search of different challenges abroad. Keeping that top talent can be difficult, but the U.S. is putting a specific focus on that, which is great to see.
What are some of the ways that you’re seeing the U.S. make that focus, Edward?
Rios: Most recently, there was a shift from the Biden administration’s executive order to specific guidance on how we’re going to attract that AI talent as a national interest. You see shortcuts and easier paths for folks to be able to remain in the United States.
If they’re pursuing their Ph.D., for example, at an elite institution or wherever, and want to continue here, we want to make sure that there are legal pathways for them to be able to do so, as opposed to having an antiquated structure that’s essentially forcing some of this talent to go elsewhere. We’re just not going to be competitive. You’re seeing that degradation already from pre-COVID 2019 to the stat you’ve cited for 2022. It’s real.
I think if we allow it to continue to deteriorate over the next five, seven years, we’re going to be behind the eight ball for global talent. So what are we doing that shifts over to the policy discussions at the national level?
We are hopefully going to pursue that guidance in whatever administration comes next.* I think it would be a focus that would require legislative change if we’re going to actually make a change to the laws that would allow us to bring these folks over and keep them. For now, it’s stopgap measures like national interest waivers and following the guidance that allows us to use the current legislative structure to bring these folks in and allow them to stay.
I think what you’re seeing here is a shift from companies competing for this talent — large tech companies or startups trying to attract these folks — to something larger. If you aggregate and extrapolate out, it’s not just competition between companies anymore. Now it is competition between countries.
That is a much more serious discussion point that I think we’re going to watch evolve over the next little bit.
Barnett: That’s interesting, that shift from companies to countries competing for talent. There’s been a lot of talk about that at the policy level, at the government level — this digital solidarity versus digital sovereignty — about trying to share resources and be on the same wavelength as other countries as they develop their own ethical safeguards around AI.
We’ve seen that on the U.S. side — this commitment to digital solidarity — and it’s coming out in the form of promoting ethical standards around AI, safeguards that help ensure innovation still remains at the forefront. But just behind that is making sure there are national security and ethical safeguards in place.
You mentioned NIWs a bit — national interest waivers — and their popularity is growing. They’re certainly a stopgap of sorts, a way to divert to a different green card path, but we’re seeing the U.S. take that on a bit more with the expansion of the critical and emerging technologies list. Can you tell us a little more about what the U.S. is doing there?
Rios: They’ve expanded the list, essentially, in 2024 to include more AI areas. What that allows is for folks who qualify to pursue a path to a green card, and these AI professionals and top candidates will have a chance because that expanded list lets us say, hey, this is talent we need. We want to make sure it’s understood that this is a critical need within the country, so let’s pave the way so that talent can stay.
Barnett: Just recently, another AI executive order was released, further directing government agencies to identify and prioritize other ways to bring that talent to the U.S. We’re not expecting too many changes in the near future, but if there are changes, there could be reductions in consular processing. There could be faster pathways to get vetted if you’re trying to come to the United States. We may see a lot of these government agencies cooperating so that AI talent can either come to the U.S. or remain here if they’re already here.
Rios: Excellent points, Chris. And that just underscores the need for us to become and remain globally competitive for this talent, as well as what we’re seeing at the company level. We’re seeing the talent go back and forth between leading companies in the AI space.
This topic is constantly evolving, as we’re seeing. It’s amazing to think of what was in place just a year or two ago from where we are now. Please check back in and keep an eye on the space, and we’ll continue to report on what we’re seeing in the global mobility arena.
Read BAL’s AI report “AI-dopting AI in corporate immigration: What HR and mobility professionals shared in our industry survey” at bal.com.
And now, the top U.S. and global immigration news.
A federal judge in Texas ruled against the Keeping Families Together program, determining that the Biden administration did not have legal authority to grant parole in place to undocumented immigrants already in the U.S. Per the district court ruling, U.S. Citizenship and Immigration Services will no longer be able to resume approving parole requests under this program.
A White House spokesperson stated, “We strongly disagree with yesterday’s rulings and we are evaluating next steps.”
More changes to immigration policies are expected early next year. Don’t forget to join our government strategies team next Tuesday, Nov. 19, for “Countdown to inauguration: Preparing your immigration program for the first 100 days of a new administration.” Look for “Events” on bal.com and register today!
In global news, the Canadian government announced new rules for multiple-entry visas, giving officials more discretion on validity and entry limits. The multiple-entry visa allows the holder to seek entry from any country as often as necessary during the visa’s period of validity.
The U.K. government announced its acceptance of the Low Pay Commission’s recommendations on the rates of the National Minimum Wage, including the National Living Wage. Effective April 1, 2025, the National Living Wage will rise 6.7%, from £11.44 to £12.21 (about $15.70 in U.S. dollars).
In the UAE, the Emirati government announced that the grace period for foreign nationals on expired residence permits to resolve their immigration status has been extended from Oct. 31 to Dec. 31.
Find all of our news at BAL.com/news. Follow us on X at @BAL_Immigration. And sign up to receive daily immigration updates in your inbox at BAL.com/newsletter.
We’ll be back next week with more insights from the world of corporate immigration.
I’m Rebecca Sanabria. Thanks for listening.
*This podcast episode was originally recorded on Nov. 1, 2024.
The BAL Immigration Report is provided by BAL. Copyright 2024 Berry Appleman & Leiden LLP. All rights reserved.
Digital redistribution to the public is permitted only with express written permission of Berry Appleman & Leiden LLP. This report does not constitute legal advice or create an attorney-client relationship. Visit bal.com for more information.