Podcast

Threat Vector | Privacy and Data Protection in the Age of Big Data

Apr 24, 2025

In this episode of Threat Vector, host David Moulton speaks with Daniel B. Rosenzweig, a leading data privacy and AI attorney, about the growing complexity of privacy compliance in the era of big data and artificial intelligence. Dan explains how businesses can build trust by aligning technical operations with legal obligations—what he calls “say what you do, do what you say.” They explore U.S. state privacy laws, global data transfer regulations, AI compliance, and the role of privacy-enhancing technologies.

Want more from Daniel? Listen to his previous Threat Vector episode, Beyond Compliance: Using Technology to Empower Privacy and Security.

 





Transcript

 

Daniel B. Rosenzweig: I've worked with regulators. I don't think regulators are really there to say, "I got you." I think if you can show them that you're putting reasonable effort in and you're taking this seriously and not just putting it to the side, that can be incredibly powerful for your business and allow you to innovate while at the same time considering the data points and business objectives you're trying to meet. [ Music ]

 

David Moulton: Welcome to "Threat Vector," the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Director of Thought Leadership for Unit 42. [ Music ] Today I'm speaking with Daniel Rosenzweig, founder and principal attorney at DBR Data Privacy Solutions. Dan is a recognized expert in data privacy and AI law, advising clients on compliance with major regulations like GDPR, CCPA, HIPAA, and the EU AI Act. With a deep technical background, he bridges the gap between legal, marketing, and technical teams, ensuring organizations translate complex legal requirements into actionable implementations. Today, we're going to talk about privacy and data protection in the age of big data, a crucial topic as companies collect and analyze vast amounts of user data while navigating a rapidly evolving regulatory landscape. But first, a disclaimer. The information provided in this podcast is not intended to constitute legal advice. All information presented is for general information purposes only. The information contained may not constitute the most up-to-date legal or interpretive compliance guidance. Contact your own attorney to obtain advice with respect to any particular legal matter. With that out of the way, here is our conversation. [ Music ] Dan, you've built a career at the intersection of law and technology, and now you lead DBR Data Privacy Solutions. What inspired you to focus on data privacy and AI law, and how do you approach bridging the gap between legal compliance and technical implementation?

 

Daniel B. Rosenzweig: Yeah, it's a great question. And again, thanks for having me. I'm really, really excited to be here. And big fan of the podcast, by the way, I listen to it routinely. It's a great, great resource for me and for others. So ultimately, I think it comes down to, at least in AI and data privacy and cyber in general, that technology is the linchpin, right? So it's one thing for the law to say X, or, you know, honor an opt-out, or give users the ability to request the data that you have on them. That's great, and I think that's an important direction on what you're supposed to do as per the law, but actioning that is not easy. That's not something you can just build overnight, right? You need to have a product, you need to have engineers, you need to have technologists that can actually action and operationalize the law. So my goal, and what I do given that I can code and have a very deep technical background, is to be that translation layer. I will work directly with legal for what they need to know from a legal perspective. I'll then work with the product team and say, hey, you know, this is what the law says, and here are some methods and frameworks and strategies you can use to action that law. But I think that's where companies are actually getting into a lot of trouble these days, through no fault of their own. It's not intentionally violating the law or anything of that nature. It's that their technology is not supporting their legal disclosures or representations because they're not actually technically actioning the legal requirements. And that is becoming more and more prevalent, especially with AI as an evolving space and very much so with data privacy and cyber.

 

David Moulton: Dan, when you think about the legislatures that are trying to pass these laws, do they have a command of technology, or does it not matter? They're looking for an outcome; they've set the intent, and you need to invent those technologies and interpret them based on what they want for the people they represent.

 

Daniel B. Rosenzweig: Honestly, I don't think there's a bright-line rule. I think it's totally dependent on the legislator, dependent on the topic, but I think ultimately you're spot on in that they're really focused on a conclusion, on an outcome, right? And how can we get to that outcome? And sometimes the law is very prescriptive and says, hey, here are methods or ways to actually accomplish that outcome, or here are the technologies you can and should be using. And other times it's, again, like, you know, data privacy law just saying honor a consumer's opt-out or the right to delete, or things of that nature. But all in all, I think, you know, they're probably trying their best, but I think if you don't have the technical nuance and background, you're going to create ambiguity or uncertainty, and the technology in that particular instance can be very powerful to get you to where you need to go.

 

David Moulton: Dan, with companies collecting and processing massive amounts of data, what do you see as the biggest privacy risk that organizations face today?

 

Daniel B. Rosenzweig: Yeah, so I think this is actually a pretty straightforward one in the sense of what the risk is; how to manage that risk is a different story, and we can talk through that as well. But really, do what you say and say what you do, right? You can have your privacy policies, you can have your public-facing statements, you can have your contractual obligations. There are a ton of different instances and mediums where you're making representations about how you're handling data, whether in the AI context or the data privacy context or whatever. But actually, again, making sure you're doing the things you say and honoring those statements and implementing the technology to support that is incredibly important, and regulators aren't stupid and, you know, plaintiffs aren't either. And that's what they're really focused on, that low-hanging fruit. Hey, your privacy policy said you're going to honor my opt-out. Then we go on to the website, easily exercise the opt-out, and see, uh-oh, that's actually not happening or it's not working. So despite the disclosure saying you're doing it, the technology isn't supporting it. And finally, the risk that also comes with that is additional legal risk, right, meaning if you are not supporting the technology the way that you should be or implementing the requirements per the law, despite claiming that you are, that's a violation of the law in and of itself as an unfair and deceptive act under, you know, consumer protection law. So it's just really, really important to, again, do what you say and say what you do.
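To make the "do what you say" point concrete, here is a minimal sketch, in Python, of what honoring an opt-out can look like at the code level: every downstream share of a user's data consults the stored preference first, so the system's behavior matches the privacy-policy statement. The preference store, the partner call, and the field names are hypothetical illustrations, not a reference to any specific product.

# Hypothetical in-memory preference store; a real system would persist this
# and propagate it to every downstream pipeline and vendor integration.
OPT_OUT_PREFERENCES = {}  # user_id -> True once the user opts out of sale/share

def record_opt_out(user_id):
    """Called when the user exercises the opt-out right on the website."""
    OPT_OUT_PREFERENCES[user_id] = True

def share_with_ad_partner(user_id, profile, send_fn):
    """Share data with a partner only when the stored preference allows it."""
    if OPT_OUT_PREFERENCES.get(user_id, False):
        # The privacy policy promised this path closes after opt-out, so the
        # code enforces the promise rather than relying on the policy text alone.
        return False
    send_fn(user_id, profile)
    return True

record_opt_out("user-123")
shared = share_with_ad_partner("user-123", {"segment": "outdoor"}, send_fn=print)
assert shared is False  # the technology backs up the disclosure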

 

David Moulton: Daniel, recently I was at the South by Southwest conference. And one of the presenters was talking about privacy and how, as a consumer, you could protect yourself. And he made a claim, or he made a statement, I guess, that kind of surprised me. You think of data brokers as a specific category in the industry. He actually went further with it. He said, "Look, any organization that collects data turns around and uses it as a revenue stream." Is that something that you think is accurate and that you've seen?

 

Daniel B. Rosenzweig: So I think there are two ways to look at this, and it's like any other thing, whether it's law or technology, it doesn't matter. I think there's the layman interpretation of what a data broker is, and I think there are certainly interpretations of that and how people discuss it in public, and that's totally cool and fine, and then there's also the term of art as defined by the law, right? So data broker is a term of art defined by various different laws. And essentially, it's when a company sells or acquires data when they don't have a direct relationship with that consumer, right? So think about, you know, a publisher on a multimedia website: they're collecting data directly from the consumer, right? The consumer is interacting with their website, it's branded as that publisher's website. The consumer reasonably expects that that company is going to handle and collect certain data on them. Conversely, you know, a data broker would be an instance where that company has acquired that data without interacting directly with the consumer, right? The consumer is not necessarily volunteering the data directly to that entity or interacting with them in a way that would qualify as that direct relationship. And then that same company that's now acquired that data is selling it, broadly defined, right, the definition of a sale is broad and isn't always just for monetary consideration, to other downstream providers and parties. And in that context, those particular companies, those term-of-art data brokers, if you will, are now becoming more heavily regulated. There are a ton of laws now that speak to these specific issues.

 

David Moulton: So I want to come back to that regulatory environment in a minute, but I'm really curious to talk about AI and data protection. AI and machine learning models, they require this massive amount of data. How can organizations balance the innovation that they want to drive with privacy compliance?

 

Daniel B. Rosenzweig: Yeah. So, I would say first and foremost, there are probably a few ways to look at this. One, despite the name of my firm being DBR Data Privacy Solutions, we actually do a ton of AI work generally that has nothing to do with privacy. And the reason I say that is, and I think it's becoming a common misconception in light of how quickly the technology and the law are evolving in this space, that there are AI-specific laws that have nothing to do with privacy. Yes, it's related, but it really is not necessarily, hey, if you're handling personal data, then, you know, this specific law applies. I would look at it instead as, generally speaking, of course there are exceptions to this, but at least currently, there are really two ways to trigger AI requirements. So we have the AI-specific laws that are largely agnostic to personal data and don't focus exclusively on data privacy. Think of the EU AI Act, the Colorado AI Act, things of that nature. Again, privacy is a component of that, but it's not the sole purpose of it, right? It's really just generally AI, specifically high-risk AI. Then you have the state data privacy laws, at least in the US. You also have GDPR, which is, if you are handling personal data and using that personal data for AI purposes or automated decision-making, then you need to comply with the AI components of those data privacy laws, right? So these are two separate but related frameworks and mechanisms that need to be considered. So to get to the crux of your question, it really comes down to what data you're utilizing, right? So if you are using personal data in your training or your output or things of that nature, or it's making an automated decision, i.e. some sort of consequential decision on how a user or a consumer will be impacted, think of employment recruiting, right? They send a bunch of resumes, and then the AI system will ingest all those resumes and then let you know who you should be hiring. That would be a consequential decision that can have an adverse impact. In those instances, you have different frameworks to follow, different regimes to follow, and different precautions that can be taken. But ultimately, I think the innovation part is key here. I think being able to continue to innovate by being aware of these requirements, really doing your due diligence, conducting your impact assessments to monitor the type of data that you're actually utilizing, having transparency, as I said before, "do what you say, say what you do," really making sure that your technology is monitored in a way that informs you of bias, unfairness, and things of that nature. I think there are definitely methods that can be implemented at the forefront to allow you to still innovate while at the same time being cognizant of what these various legal requirements are.
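As a rough illustration of the two trigger paths described above, the sketch below encodes the starting questions an impact assessment might ask: does the use case involve personal data, does it make a consequential automated decision, and does it sit in a high-risk domain? The categories and the mapping to legal regimes are simplified assumptions for illustration only, not legal guidance.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    uses_personal_data: bool      # e.g. profiles, resumes, location history
    consequential_decision: bool  # e.g. hiring, lending, housing outcomes
    high_risk_domain: bool        # e.g. employment, credit, essential services

def frameworks_to_review(use_case):
    """Return the families of requirements that likely warrant review.

    Mirrors the two triggers discussed in the episode: privacy-law obligations
    follow the personal data, while AI-specific laws (EU AI Act, Colorado AI
    Act) follow risk and consequential decisions.
    """
    findings = []
    if use_case.uses_personal_data:
        findings.append("State privacy laws / GDPR automated decision-making provisions")
    if use_case.consequential_decision or use_case.high_risk_domain:
        findings.append("AI-specific laws (e.g. EU AI Act, Colorado AI Act) plus an impact assessment")
    if not findings:
        findings.append("No obvious trigger, but document the assessment anyway")
    return findings

resume_screener = AIUseCase("resume screening", True, True, True)
print(frameworks_to_review(resume_screener))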

 

David Moulton: Are there specific legal practices or requirements that companies should be following when they're using AI for data processing?

 

Daniel B. Rosenzweig: Yeah, so again, I think there are two things to consider. One, the first question is, are you using personal data as part of AI? If you are, then I would consider the US comprehensive state privacy laws, particularly laws like the CCPA and their automated decision-making provisions, and then implement the various requirements pertaining to that. So you have certain rights that you need to offer consumers. You have certain diligence practices. You need to provide transparency, i.e. privacy policy disclosures, just-in-time notices, things of that nature. Then you have to assess, and this is really done well through an impact assessment, right, you have an AI impact assessment or privacy impact assessment to address the types of data you'll be utilizing and see what's triggered. And if ultimately you determine that it qualifies as high risk, or a system that can have an adverse impact or consequential decision on a data subject or a user, then you want to consider laws like the EU AI Act or Colorado AI Act. Some core functions that can be routinely implemented are AI literacy, or AI training, right? Make sure that the people within your business who are handling the data or using the AI system understand how it works, right, so that they understand what those requirements are and what data is going in. And also make sure your contracts are up to date, right? I think having your contractual protections as it pertains to data use, data ownership, training data, things of that nature. But I think ultimately taking the steps now can be very, very powerful. I've worked with regulators. I don't think regulators are really there to say, "I got you." And I think if you can show them that you're putting reasonable effort in and you're taking this seriously and not just putting it to the side, that can be incredibly powerful for your business and allow you to innovate while at the same time considering the data points and business objectives you're trying to meet.

 

David Moulton: So you've mentioned some of the regulatory bodies out there, some of the different acts, the EU AI Act, the GDPR. You know, as you think about that global privacy landscape and how it's rapidly changing with some of these laws, how should organizations stay ahead of those compliance requirements?

 

Daniel B. Rosenzweig: I would say one thing: a reason for the importance of doing this documentation, this due diligence and methodology, upfront is that AI is very different relative to data privacy. Think about how these models are developed and trained and the resources that are put into them. It takes a lot of time, literal physical energy and power, and so many other elements and things that require a lot of time and resources. And if you end up building this on what is deemed dirty data, or you're doing it in a way that's not compliant, you're risking your business goals down the road. Some regulators have made public statements, and I don't think we'll ever really know if they'll enforce this, but they've said that if you can't go in and remove your dirty data from your training model -- and we know how difficult that can be, right, once the model is trained -- then you arguably may have to delete the entire dataset or delete the entire model, or things that, again, I think are pretty hyperbolic right now, but that's just not a risk you want to take, right? That can be years of research and work, and that ultimately can have a massive impact on your bottom line and your business objectives.

 

David Moulton: What's your take on some of the state-level privacy laws that are in the pipeline right now here in the US?

 

Daniel B. Rosenzweig: Yeah, so right now, off the top of my head, I think there are 19 in effect. And I think they largely focus on how consumers can have certain rights over their data and what businesses can and cannot be doing with that data, as well as the types of transparency requirements for the company, and things of that nature. I think ultimately, how a company wants to comply comes down to, I think, resources. So if you're a global company, or even, you know, just a national company with, you know, practices in every state, I think it's going to be important for you to figure out what makes the most sense for your business. If you decide that a state-by-state approach works for you, I think that can be incredibly powerful. It can also help, you know, with monetization and things of that nature. But if you don't have those resources, or you're not even necessarily a big consumer-facing business, it's more B2B and things of that nature, I think you can explore having a one-size-fits-all, at least country-specific, approach, you know, maybe applying California as a baseline, notwithstanding some nuances across other data points, and there are things to consider surrounding sensitive data and things of that nature. But having that baseline can be very powerful for that company as well. It really depends on how they want to approach it. But data privacy is here to stay, and it's not going away, and it's a big part of what businesses should be doing, and I think it's an important step to be taking on how you want to handle your data. [ Music ]

 

David Moulton: Daniel, you've advised companies on privacy-enhancing technologies, or PETs. Give me an example of a PET.

 

Daniel B. Rosenzweig: Oh, yeah, differential privacy is a great one. Right? I think that ultimately, you know, again, there's nuanced technology here, but broadly speaking, it's essentially just introducing noise into a dataset so that it's not necessarily as identifiable or personally identifiable to a user. And I think it's incredibly powerful. I think it allows companies to manage certain business objectives while at the same time not running afoul of some of their data privacy obligations. The one thing I would say on PETs: they're incredibly powerful. I think they work very, very well. But at the same time, I also want folks to understand what they don't do, right? And what they don't do is, generally speaking, remove your obligations under data privacy law. Right? A lot of folks are like, "Hey, I'm using a PET." And then my question is, "Okay, great. Is it personal data going in and personal data going out?" "Yeah, yeah." Okay, well, then it's still personal data, and you still need to adhere to your requirements, you know, under relevant law. And a lot of companies will confuse things or market that they're using PETs in a way that removes their obligations under data privacy law. And I think that's where companies can, again, get into trouble based on what we were talking about a short while ago.
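For readers who want to see the concept concretely, here is a minimal sketch of the differential-privacy idea described above: calibrated random noise is added to an aggregate query so that any one individual's presence or absence is hard to infer from the answer. The epsilon value, the Laplace mechanism, and the toy dataset are illustrative assumptions, not a production recipe.

import math
import random

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (one person changes the true count by
    at most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP for a
    single release of this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy dataset: each record is one (hypothetical) user's opt-out status.
users = [{"opted_out": random.random() < 0.3} for _ in range(10_000)]
noisy = dp_count(users, lambda u: u["opted_out"], epsilon=0.5)
print(f"Noisy opt-out count: {noisy:.0f}")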

 

David Moulton: Let's jump into some of the ad tech and user privacy topics that have been top of mind for me. Digital advertising is under a lot of pressure right now to balance targeted advertising with user privacy. What do you think some of the biggest challenges are in ad tech right now?

 

Daniel B. Rosenzweig: So, a bit of a quick history lesson. It's actually interesting that a lot of these laws, particularly laws like the CCPA, were actually passed as a response to targeted advertising, right? So, I think you're spot on to kind of hone in on this and focus on this area, because this is why a lot of privacy laws are where they are today, as a response to targeted advertising and things of that nature. I think there are a couple things right now that are a little more difficult for publishers and folks in the ad tech space. One is ignoring the hype. It is amazing to me how many companies will come and try to approach certain things just in the name of using buzzwords and using technologies in a way that they think they have to, right? PETs, privacy-enhancing technologies, are actually a really good example of that, and even AI, right; you don't necessarily have to use those technologies to achieve a goal. And I think right now for targeted advertising in particular, a lot of companies are thinking, we're going to implement these technical solutions that are going to mitigate our exposure for targeted advertising or replace targeted advertising. And while I think that can help, certainly it's not just, you know, apples to apples, right? I think you need to understand what is your business objective, what is your risk posture, what are you trying to achieve here, and how do we need to manage our own risk as it pertains to those business objectives. So I think, yeah, again, ignoring the hype is going to be really, really important. And finally, I would say, as it pertains particularly to ad tech: admit when you're using personal data. I think personal data has become such a "negative" word, and it doesn't need to be. It's okay if you're using personal data for targeted advertising and you're using it in a way that fulfills a business objective while also giving consumers the ability to utilize your services. I think where companies are getting into trouble in the ad tech space is what we briefly talked about in the beginning of the chat, which is whether or not you're doing what you're supposed to do, right? So you're telling the consumer, "Hey, we use your data for targeted advertising. We are going to allow you to opt out of that if you want to. But please understand, here's what happens when you opt out." And if they're going to exercise that right to opt out, then honor that right to opt out, right? Make sure that you implement the technology to support that. And I think that is where ad tech companies and publishers in particular are getting in a lot of trouble. Through no fault of their own, as I said at the beginning of the conversation, it's that they're implementing technologies to support targeted advertising or enable choice for consumers and then not actually configuring the technology in a way that fulfills those requirements. And then they're continuing to use targeted advertising when they shouldn't be.
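One concrete way publishers can action that opt-out is to check the opt-out signals before any targeted-advertising code runs. The sketch below assumes the browser-sent Global Privacy Control header (Sec-GPC) plus a site-level stored preference; the header name comes from the GPC proposal, while the preference handling and the ad-loading functions are hypothetical.

def targeted_ads_allowed(request_headers, stored_opt_out=False):
    """Decide whether targeted advertising may run for this request.

    Some state privacy regulations (California's, for example) treat the
    Global Privacy Control signal as a valid opt-out of sale/share, so it is
    honored the same way as an opt-out submitted on the site itself.
    """
    gpc_signal = request_headers.get("Sec-GPC", "") == "1"
    return not (gpc_signal or stored_opt_out)

def render_page(request_headers, user_opted_out):
    if targeted_ads_allowed(request_headers, user_opted_out):
        return "load targeted ad tags"
    # Fall back to contextual (non-personalized) advertising instead.
    return "load contextual ad tags only"

print(render_page({"Sec-GPC": "1"}, user_opted_out=False))  # contextual only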

 

David Moulton: So I want to go back to something that you said right at the opening, which was "say what you're going to do and then do it," right? I'm paraphrasing, I think, a little bit. But to me, that's the definition of trust. If I say I'm going to do something and I don't do it, don't trust me. And what you're saying here is, yeah, if you establish that you're going to do something and you do it, that allows for trust, and then you can actually deliver that better experience. But if you violate the trust, then "personal data" becomes one of those words where you go, oh, I don't want to be involved in that, because it's a synonym for untrustworthy. Let's shift gears a little bit and talk about the FTC, which has been increasingly active in enforcing privacy violations. What should companies learn from some of these recent FTC actions and settlements?

 

Daniel B. Rosenzweig: So honestly, it's kind of like a broken record, and I think it's really important to continue to emphasize: it's the same thing, right? The FTC, like other regulators, is focused on the same stuff. Are you doing what you're saying? Are you saying what you're doing? Are you honoring your statements or actioning your statements? And really, where the FTC has been incredibly focused, at least in the last administration, and we'll see if this continues to be true now, is, again, the use of sensitive data, particularly precise geolocation. And I think ultimately, if companies are going to fall within those categories and they're going to utilize that data, then make sure you're utilizing it in a way that complies with your relevant legal requirements. You know, there are some states in particular, as well as the FTC, where they'll expect that you're obtaining consent from the user to process and utilize sensitive personal data, right, like precise geolocation. So make sure you have adequate consents and things of that nature and honor what you say you're going to be doing as it pertains to that sensitive data. So yeah, really, really important. We'll see where the FTC continues to go with the new administration. But ultimately, I'm not surprised by any of their enforcement activities, let alone the enforcement activities of state AGs as well, because they're all typically following, at least at a high level, a lot of the same stuff.

 

David Moulton: Daniel, let's talk about cross-border data transfer. With the evolving data transfer regulations, how should businesses handle cross-border data flows while staying compliant?

 

Daniel B. Rosenzweig: Yeah, so this is incredibly timely. Right now in the US, we are at a crossroads, because we have now really implemented the first true federal data transfer framework coming out of the US, notwithstanding exceptions, but I would say this is one that impacts various different industries, and there are two examples of it. One is the Protecting Americans' Data from Foreign Adversaries Act, also known as PADFA. This essentially makes it unlawful for certain data brokers, again broadly defined, it's a term of art, as we talked about a little while ago, so don't assume that you're not a data broker just by virtue of the word "data broker," to transfer certain sensitive personal information, again broadly defined, so sensitive can be broadly defined, to countries of concern. And those countries of concern are China, Russia, North Korea, and Iran. And I think China is really one of those countries that does impact a lot of companies in the US. Relatedly, we have the DOJ rule, a newly promulgated rule by the DOJ that, similarly but with some differences, prohibits US companies from transferring bulk sensitive personal data of US persons to countries of concern, and similarly to covered entities that are controlled or influenced by those countries of concern. In this instance, it's Russia, Iran, China, North Korea, Venezuela, and Cuba. And I would say the distinction with the DOJ rule in particular is that it's not a data privacy rule. It is a national security rule. And I think this is impacting companies substantially. So I think the first step is to assess whether you're even sending bulk data to these countries of concern. Again, the one that would likely be relevant to most commercial, you know, global companies would be China, if I were to guess. And that includes Hong Kong and Macau, so something to really, really consider. And ultimately, certain mitigations and strategies need to be put in place, because this isn't a joke, right? This is something that the DOJ has really put forward. And the DOJ rule in particular comes with both civil and criminal penalties. So taking those steps and assessments to see where your data's going, who it's being shared with, for what purpose, and whether it qualifies as sensitive data or as a bulk data transfer: it's really, really important to work with your legal teams on this nuanced issue to assess if you need to take necessary safeguards and mitigations to avoid what can be some pretty serious consequences.
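As a simplified illustration of the kind of control being described, the sketch below gates a bulk transfer on the destination country against the countries named in the episode. The country lists come from the conversation; the bulk threshold, record counts, and escalation behavior are hypothetical, and a real program would be built with counsel against the actual rule text.

# Countries of concern as described in the episode (PADFA and the DOJ rule).
PADFA_COUNTRIES = {"CN", "RU", "KP", "IR"}
DOJ_RULE_COUNTRIES = {"CN", "RU", "KP", "IR", "VE", "CU"}
# The episode also flags Hong Kong and Macau alongside China for the DOJ rule.
CHINA_ADJACENT = {"HK", "MO"}

def transfer_permitted(destination_country, record_count, is_sensitive,
                       bulk_threshold=10_000):
    """Very rough pre-check before sending data abroad.

    Illustrative only: the real rules turn on detailed definitions of bulk
    thresholds, sensitive data categories, covered persons, and exemptions,
    so a failed check here should route to legal review, not self-serve logic.
    """
    blocked = PADFA_COUNTRIES | DOJ_RULE_COUNTRIES | CHINA_ADJACENT
    is_bulk = record_count >= bulk_threshold
    if destination_country in blocked and is_sensitive and is_bulk:
        return False  # escalate to counsel instead of transferring
    return True

print(transfer_permitted("HK", record_count=250_000, is_sensitive=True))  # False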

 

David Moulton: Dan, looking ahead, what do you think the biggest development in privacy and AI regulation will be in the next five years?

 

Daniel B. Rosenzweig: What will be the biggest development in the next five minutes? I mean, it's amazing how much things have changed and continue to change. I think it will continue to be, at least in the privacy space, what you can do with personal data as it pertains to AI. I think there are going to be some laws that speak to this a little more, you know, prescriptively, to kind of align with how the AI systems are currently operating as it pertains to personal data, meaning can you use personal data for training purposes? If so, here's what you need to do to do that. Are there exceptions to that? Are there mitigations that can be put in place? And on the AI side, specifically agnostic to personal data, right now where we're seeing a lot of the laws focused, particularly in the US, is on AI systems that are deemed high risk. I think we're going to see, especially as the law continues to develop, and again, I have no way of knowing this, and the technology continues to develop, the law probably becoming a little more, you know, focused on AI generally. I don't know if it's always going to be limited to just high-risk AI systems. I think there are going to be requirements for AI generally. And some of those may be pretty benign requirements that are easily mitigated or easily complied with. And others may be, you know, more stringent and will require a certain amount of time and effort to comply with. But I think that's ultimately where we're going to see things going.

 

David Moulton: Do you have any recommendations on what companies should be doing now to prepare for these changes?

 

Daniel B. Rosenzweig: Yeah, I mean, it's the same stuff we've been talking about. I think conducting those impact assessments is really, really important. Conduct the due diligence, document the steps that you're taking so that you can, you know, have a good record of what you're trying to accomplish and why you're trying to do it, and to show good data governance and AI governance; I think that's going to be really, really important. And continue to innovate, but just think about how you're handling the data and what steps you need to be taking to, you know, utilize that data. And then finally, implement training, right? I think training is really, really important. Teach, you know, your folks, and, not to mention, under the EU AI Act, and other laws I'm sure will continue to follow suit, you're required to do that. You are required to conduct training and teach your users and your internal stakeholders on the impact of the AI system and how it should be operating. And I think that's just good due diligence and good hygiene right now that can really allow you to innovate and mitigate any exposure down the road.

 

David Moulton: What's the most important thing a listener should take away from today's conversation?

 

Daniel B. Rosenzweig: Again, say what you do, do what you say. I think that's incredibly important, and it's an easy kind of rule to follow. Next, I would certainly technically validate, audit, and assess how your tools are operating from an AI, obviously cybersecurity, and privacy, you know, touch point, to see that you're handling data in a way that doesn't run you afoul of that "say what you do, do what you say" mentality. And then finally, definitely look into the data transfers. I think that's a huge development in this space, the DOJ rule and PADFA in particular.

 

David Moulton: Dan, thanks for coming back on "Threat Vector." As usual, I am richer for this conversation. I know that our listeners deeply, deeply appreciate your expertise and your point of view at that intersection between law and technology. It's so important. Really appreciate you spending the time with me today.

 

Daniel B. Rosenzweig: Great. Yeah, no, I've really enjoyed it. Thanks for having me. [ Music ]

 

David Moulton: That's it for today. If you like what you heard, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify. Those reviews and your feedback really do help me understand what you want to hear about. If you enjoyed today's conversation, be sure to check out Episode 24, "Beyond Compliance: Using Technology to Empower Privacy and Security." Dan and I discussed how businesses can bridge the gap between legal and technical teams to navigate changing privacy laws. If you want to reach out to me directly about the show, email me at threatvector@paloaltonetworks.com. I want to thank our Executive Producer, Michael Heller; our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits the show and mixes the audio. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]
