Navigating Data Privacy

Explore top LinkedIn content from expert professionals.

  • Beth Kanter (Influencer)

    Trainer, Consultant & Nonprofit Innovator in digital transformation & workplace wellbeing, recognized by Fast Company & NTEN Lifetime Achievement Award.

    521,873 followers

    This Stanford study examined how six major AI companies (Anthropic, OpenAI, Google, Meta, Microsoft, and Amazon) handle user data from chatbot conversations. Here are the main privacy concerns:

    👀 All six companies use chat data for training by default, though some allow opt-out
    👀 Data retention is often indefinite, with personal information stored long-term
    👀 Cross-platform data merging occurs at multi-product companies (Google, Meta, Microsoft, Amazon)
    👀 Children’s data is handled inconsistently, with most companies not adequately protecting minors
    👀 Privacy policies offer limited transparency: they are complex, hard to understand, and often lack crucial details about actual practices

    Practical takeaways for acceptable use policy and training for nonprofits using generative AI:

    ✅ Assume anything you share will be used for training: sensitive information, uploaded files, health details, biometric data, etc.
    ✅ Opt out when possible; proactively disable data collection for training (Meta is the one company where you cannot)
    ✅ Information cascades through ecosystems: your inputs can lead to inferences that affect ads, recommendations, and potentially insurance or other third parties
    ✅ Children’s data deserves special concern; age verification and consent protections are inconsistent

    Some questions to consider in acceptable use policies and to incorporate in any training:

    ❓ What types of sensitive information might your nonprofit staff share with generative AI?
    ❓ Does your nonprofit currently identify what counts as “sensitive information” (beyond PII) that should not be shared with generative AI? Is this incorporated into training?
    ❓ Are you working with children, people with health conditions, or others whose data could be particularly harmful if leaked or misused?
    ❓ What would be the consequences if sensitive information or strategic organizational data ended up being used to train AI models? How might this affect trust, compliance, or your mission? How is this communicated in training and policy?

    Across the board, the Stanford research points out that developers’ privacy policies lack essential information about their practices. The researchers recommend that policymakers and developers address the data privacy challenges posed by LLM-powered chatbots through comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default (see the sketch below). “We need to promote innovation in privacy-preserving AI, so that user privacy isn’t an afterthought." How are you advocating for privacy-preserving AI? How are you educating your staff to navigate this challenge? https://lnkd.in/g3RmbEwD
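
    To make the last recommendation concrete, here is a minimal sketch of a block-by-default gate that refuses to forward chat input containing likely PII. The regex patterns and the `forward_to_model` name are illustrative assumptions, not part of the Stanford proposal; a production filter would use a dedicated PII-detection library.

    ```python
    import re

    # Illustrative patterns only; real deployments should use a maintained
    # PII-detection library covering many more categories.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
    }

    def check_chat_input(text: str) -> list[str]:
        """Return the PII categories detected in a chat message."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

    def send_if_clean(text: str) -> None:
        """Block-by-default gate: refuse to forward messages containing PII."""
        found = check_chat_input(text)
        if found:
            raise ValueError(f"Message blocked, possible PII detected: {', '.join(found)}")
        # forward_to_model(text)  # hypothetical call to your chatbot backend

    try:
        send_if_clean("Our donor Jane can be reached at jane@example.org")
    except ValueError as err:
        print(err)  # Message blocked, possible PII detected: email
    ```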

  • Marc Beierschoder (Influencer)

    Most companies scale the wrong things. I fix that. | From complexity to repeatable execution | Partner, Deloitte

    146,184 followers

    66% of AI users say data privacy is their top concern. What does that tell us? Trust isn’t just a feature; it’s the foundation of AI’s future.

    When breaches happen, the cost isn’t measured in fines or headlines alone; it’s measured in lost trust. I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app, not because they didn’t need the service, but because they no longer felt safe.

    This isn’t just about data. It’s about people’s lives: trust broken, confidence shattered. Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

    At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

    How can leaders rebuild trust when it’s lost?

    ✔️ Turn privacy into empowerment: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.
    ✔️ Proactively protect privacy: AI can do more than process data; it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.
    ✔️ Lead with ethics, not just compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.
    ✔️ Design for anonymity: Techniques like differential privacy keep sensitive data safe while enabling innovation (a minimal sketch follows below). Your customers shouldn’t have to trade their privacy for progress.

    Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited; it’s destined to fail.

    How would you regain trust in this situation? Let’s share and inspire each other 👇

    #AI #DataPrivacy #Leadership #CustomerTrust #Ethics
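
    For readers unfamiliar with the technique named in the last bullet, here is a minimal sketch of differential privacy’s Laplace mechanism: a count query is released with calibrated noise so no single person’s presence can be inferred. The epsilon value and the data are illustrative assumptions, not a production configuration.

    ```python
    import random

    def laplace_noise(scale: float) -> float:
        """Laplace(0, scale) noise as the difference of two exponentials."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def dp_count(records: list[bool], epsilon: float = 0.5) -> float:
        """Differentially private count of True records.

        A counting query has sensitivity 1 (one person changes the count
        by at most 1), so Laplace noise with scale 1/epsilon suffices.
        """
        return sum(records) + laplace_noise(1.0 / epsilon)

    # Hypothetical example: how many patients opted in to a feature,
    # released without exposing any individual's answer.
    opted_in = [True, False, True, True, False, True, False, True]
    print(round(dp_count(opted_in, epsilon=0.5), 1))
    ```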

  • Armand Ruiz (Influencer)

    building AI systems @meta

    206,260 followers

    How To Handle Sensitive Information in Your Next AI Project

    It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

    1. Identify and Classify Sensitive Data
    Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as GDPR or the California Consumer Privacy Act.

    2. Minimize Data Exposure
    Only share the necessary information with AI endpoints. For PII, such as names, addresses, or social security numbers, consider redacting this information before making API calls (see the sketch below), especially if the data could be linked to sensitive applications, like healthcare or financial services.

    3. Avoid Sharing Highly Sensitive Information
    Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

    4. Implement Data Anonymization
    When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

    5. Regularly Review and Update Privacy Practices
    Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.

    Remember, safeguarding sensitive information is not just about compliance; it's about earning and keeping the trust of your users.
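
    A minimal sketch of the redaction step in practice 2: mask PII with placeholders before the prompt leaves your system, keeping a local-only map if you need the values later. The patterns and the `call_ai_endpoint` name are illustrative assumptions, not a specific vendor API.

    ```python
    import re

    # Illustrative patterns; production systems should use a maintained
    # PII-detection library and cover many more categories.
    REDACTIONS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
        (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "[CARD]"),
    ]

    def redact(text: str) -> tuple[str, dict[str, list[str]]]:
        """Replace PII with placeholders; return the redacted text plus a
        local-only map of what was removed (never sent to the endpoint)."""
        removed: dict[str, list[str]] = {}
        for pattern, placeholder in REDACTIONS:
            for match in pattern.findall(text):
                removed.setdefault(placeholder, []).append(match)
            text = pattern.sub(placeholder, text)
        return text, removed

    prompt = "Summarize: patient reachable at ana@example.org, SSN 123-45-6789."
    safe_prompt, removed = redact(prompt)
    print(safe_prompt)  # Summarize: patient reachable at [EMAIL], SSN [SSN].
    # response = call_ai_endpoint(safe_prompt)  # hypothetical API call
    ```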

  • Alexandra Geese

    Member of the European Parliament The Greens/EFA. Digital, Democracy, Equality

    7,185 followers

    Women’s rights don’t look good through Meta's AI glasses.

    In her new book, Laura Bates paints a grim, yet painfully accurate, picture of how AI technologies are deliberately harming women. Meta’s AI glasses are a striking example. Tech billionaires have decided to put their money on instruments to sell women and girls to men. It's like a giant Epstein network, where men hold power and women are objects given to them as bait or reward.

    Reports of women being filmed through these glasses without their consent are piling up. One woman found herself on TikTok after a man approached her at the beach to compliment her on her bikini. Unaware she was being recorded, she revealed private information about her employer and family. In another case reported by the BBC, a woman’s phone number was visible, leading to a bombardment of calls and messages from strangers.

    Such videos are part of a social media “pick-up trend”, where men film women in gyms, airports or beaches with AI glasses and upload the footage online. The comment sections quickly fill with derogatory and hateful comments about the women. The result? Humiliation, fear, and feelings of insecurity in public spaces.

    Meta points to the small LED light that signals recording. In reality, this is only a fig leaf. Online tutorials show how easily the light can be disabled or covered. And frankly, it's not my job to check whether a man's glasses have a light!

    Instead of tackling those risks, Meta plans to intensify them. According to TechCrunch, the company is developing a facial recognition feature that would identify people and provide information about them without their knowledge. This makes it easier than ever to access personal data, increasing the risk of stalking and harassment. Meta argues the feature would only recognise public profiles or existing connections. But having a public account or being in a large WhatsApp group does not equal consent to share personal information. Already today, Meta AI glasses can be paired with facial recognition tools like PimEyes. Forbes reports that two students tested this and were able to find the name, home address and phone number of strangers instantly.

    In the EU, secretly recording and widely sharing such videos can violate the GDPR. Real-time facial recognition would likely qualify as a high-risk system under the EU AI Act, requiring strict human oversight and testing before a product is placed on the market. The AI Act also prohibits building face databases by scraping images from the web. This makes Meta’s plans to integrate a facial recognition feature in the glasses highly dubious.

    Unfortunately, the current motto in Brussels is "deregulation". While getting rid of unnecessary red tape is important, many rules protect people from violence and attacks on their dignity, women in particular. Let's put people first, and resist the calls of tech companies and the US government to strip citizens of their rights.

  • Alena Funtikova-White, Ph.D

    VP of North Texas ISSA | Mentor | Cybersecurity Advocate | Leader | Lifelong Learner | Educator | Cyber Threat Intelligence Professional

    3,616 followers

    UPDATE: On November 22, an update was added to the article, basically saying that Google’s recent wording change around Gmail “smart features” caused major confusion, including early reports suggesting emails were being used to train Google’s AI models by default. After reviewing Google’s documentation, the author of the article concluded that “it doesn’t appear to be the case”. Gmail does scan content for built-in features like spam filtering and suggestions, but that is supposedly separate from training generative AI. 🤔 “… doesn’t appear to be the case” is the operative phrase in that update… Isn’t it? (Link to the updated source is in the comments.)

    🚨 Heads-up, cyber friends: your inbox might be humming with more than just deadlines. According to Malwarebytes, Gmail is automatically opting you in to have all your emails and attachments used for training its AI models. Unless you manually opt out, your private correspondence may now be fueling AI features behind the scenes.

    Here are the key takeaways:

    🔍 Opt-in by default matters. Instead of asking you first, the service assumes consent. This flips the script on personal privacy: it’s no longer “do you want to participate?” but “you are participating unless you act.” That shifts the power and, for many, erodes trust.

    🤖 Training AI on consumer data without explicit consent is becoming a worrying trend. Using everyday user content (emails, attachments, chats) to refine AI models means personal information is being repurposed in unexpected ways. Even if anonymized, the fact that your private communications become a training set should raise eyebrows.

    🛡️ There are implications for professionals and individuals alike. If you handle sensitive info (clients, students, research, education), this isn’t just a nuisance; it’s a risk. Consent needs to be real, transparent and meaningful, not buried under settings toggles.

    🧠 What you can do: Go into your Gmail settings and turn off “Smart features” in both the Gmail/Chat/Meet and Workspace sections. Because yes, you have to flip both.

    In an era where data is called “the new oil,” assuming people want to pump their private life into AI refineries without explicit agreement feels deeply off-brand for what privacy should mean. If we’re teaching the next generation how to think and how to work ethically, we can’t give tacit permission to a default that says “we’ll use your stuff unless you speak up.”

    As someone who lives at the intersection of cybersecurity, teaching, and digital citizenship, I say: we have to call this out. Let’s insist that “Yes” means yes, not “We quietly opted you in; you could opt out if you found it.” Control over personal data isn’t a bonus; it’s fundamental.

    #WomenInCyber #CyberSecurityLeadership #DataPrivacy #AIethics #ConsentFirst #StopAndSmellTheFlowers #ISSA #CyberThreatIntelligence #TechTrends #DigitalRights

  • Katharina Koerner

AI Governance & Security | Trace3 : All Possibilities Live in Technology: Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,608 followers

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era", addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks.

    It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

    The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels. Existing laws are inadequate for the emerging challenges posed by AI systems, because they don't fully tackle the shortcomings of the FIPs framework or concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

    According to the paper, FIPs are outdated and not well-suited for modern data and AI complexities, because they:
    - do not address the power imbalance between data collectors and individuals;
    - fail to enforce data minimization and purpose limitation effectively;
    - place too much responsibility on individuals for privacy management;
    - allow data collection by default, putting the onus on individuals to opt out;
    - focus on procedural rather than substantive protections;
    - struggle with the concepts of consent and legitimate interest, complicating privacy management.

    The paper emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. It suggests three key strategies to mitigate the privacy harms of AI:

    1.) Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.

    2.) Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.

    3.) Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences (a minimal sketch of such a permission check follows below). This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
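
    To make the opt-in-by-default and data-permissioning ideas concrete, here is a minimal sketch of a consent registry that a data pipeline could consult before using a record for model training. The class and method names are illustrative assumptions, not an API from the white paper.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        """Opt-in by default: no grant on record means no processing."""
        grants: dict[str, set[str]] = field(default_factory=dict)

        def grant(self, user_id: str, purpose: str) -> None:
            self.grants.setdefault(user_id, set()).add(purpose)

        def revoke(self, user_id: str, purpose: str) -> None:
            self.grants.get(user_id, set()).discard(purpose)

        def allowed(self, user_id: str, purpose: str) -> bool:
            return purpose in self.grants.get(user_id, set())

    def select_training_records(records: list[dict], registry: ConsentRegistry) -> list[dict]:
        """Keep only records whose owners affirmatively opted in to training."""
        return [r for r in records if registry.allowed(r["user_id"], "model_training")]

    registry = ConsentRegistry()
    registry.grant("u1", "model_training")  # u1 opted in; u2 was never asked
    records = [{"user_id": "u1", "text": "..."}, {"user_id": "u2", "text": "..."}]
    print(select_training_records(records, registry))  # only u1's record survives
    ```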

  • Jon Suarez-Davis (jsd)

    Chief Strategy Officer @ Transparent Partners | Investor | Advisor | Digital Transformation Leader | Ex: Salesforce, Krux, Kellogg’s

    18,194 followers

    Google's cookies announcement isn't the week's big news; Oracle's $115 million privacy settlement is. 👇🏼

    This week's most important news headline is: "Oracle's $115 million privacy settlement could change industry data collection methods." Every marketer and media leader should understand the allegations in the complaint and execute a review of their data strategy, policies, processes, and protocols, especially as they pertain to third-party data. While we've been talking and fretting about cookie deprecation for four years, we've missed the plot on data permission and usage. It's time to get our priorities straight.

    Article in the comments section, and industry reaction from legal and data experts below.

    Jason Barnes, partner at the Simmons Hanly Conroy law firm: "This case is groundbreaking. The allegations in the complaint were that Oracle was building detailed dossiers about consumers with whom it had no first-party relationship. Rather than face a jury, Oracle agreed to a significant monetary settlement and also announced it was getting out of the business," Barnes said. "The big takeaway is that surveillance tech companies that lack a first-party relationship with consumers have a significant problem: no American has actually consented to having their personal information surveilled everywhere they go by a company they've never heard of, packaged into a commoditized dossier, and then monetized and sold without their knowledge."

    Debbie Reynolds, Founder, Chief Executive Officer, and Chief Data Privacy Officer at Debbie Reynolds Consulting, LLC: "Oracle's privacy case settlement is a significant precedent and highlights that privacy risks are now recognized as business risks, with reduced profits, increased regulatory pressure, and higher consumer expectations impacting organizations' bottom lines," Reynolds said. "One of the most important features of this settlement is Oracle's agreement to stop collecting user-generated information from external URLs and online forms, which is a significant concession in how they do business. Other businesses should take note."

    #marketing #data #media Ketch super{set}

  • Francesco Mazzola

    Security Architect & Cyber Risk Leader | GRC‑driven security for global enterprises | Data Protection & AI Risk Governance | CISSP

    7,152 followers

    🧭 The role of the Data Protection Officer (DPO) is undergoing a profound transformation. Once viewed primarily as a compliance steward for the General Data Protection Regulation (#GDPR), the DPO is now emerging as a central #architect of digital governance.

    This evolution is driven by the convergence of multiple EU regulatory frameworks, namely the #NIS2 Directive, the Digital Operational Resilience Act (#DORA), and the #AIAct, to name the most relevant, each introducing new layers of accountability, risk management, data governance and ethical oversight. Together, these instruments form a complex regulatory ecosystem that demands a multidisciplinary approach.

    Modern DPOs are no longer just legal compliance officers; they now operate at the dynamic crossroads of #law, #cybersecurity, operational #resilience, and AI #ethics. As digital ecosystems grow more complex, the DPO is evolving into a true #DataProtectionEngineer, equipped not only to interpret regulations but to architect privacy-aware systems.

    📌 This role demands a deep understanding of how emerging technologies such as AI, #IoT, and #cloudinfrastructure affect the fundamental rights and freedoms of individuals. It’s not just about safeguarding data; it’s about safeguarding dignity, autonomy, and #trust in the digital age.

    ⚠️ Key Challenges for Organisations

    As regulatory expectations intensify, organisations face a series of strategic and operational hurdles that underscore the importance of a well-educated and experienced DPO.

    1️⃣ Regulatory Fragmentation and Overlap
    Multiple frameworks introduce overlapping obligations, definitions, and enforcement mechanisms. Without centralised coordination, organisations risk inconsistent compliance and exposure to regulatory sanctions. The DPO serves as the central figure for harmonising these requirements across legal, technical, and operational domains.

    2️⃣ Accountability and Demonstrable Compliance
    Supervisory authorities increasingly demand evidence-based compliance. Organisations must maintain detailed records of data flows, AI development processes, and incident responses (a minimal record-keeping sketch follows below). The DPO must champion a culture of #accountability, supported by robust governance structures and documentation protocols.

    3️⃣ Technical and Organisational Complexity
    DORA mandates rigorous digital resilience testing and ICT risk assessments. The AI Act imposes strict data quality, explainability, and human oversight requirements. These obligations require cross-functional collaboration and significant investment in infrastructure, training, and tooling.

    At the end of the day, the DPO must act as a change agent, fostering alignment between compliance, innovation, and business objectives. The challenge is formidable, but so is the opportunity to redefine the role as a cornerstone of ethical, secure, and forward-looking digital governance.
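
    As one way to picture "demonstrable compliance", here is a minimal sketch of a machine-readable record of processing activities in the spirit of GDPR Article 30. The fields and the example values are illustrative assumptions, not a complete legal template.

    ```python
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ProcessingRecord:
        """Minimal record-of-processing entry (illustrative, not exhaustive)."""
        activity: str           # what the processing does
        purpose: str            # why the data is processed
        lawful_basis: str       # e.g. consent, contract, legitimate interests
        data_categories: list[str]
        recipients: list[str]   # who receives the data, incl. processors
        retention: str          # how long the data is kept
        safeguards: str         # technical and organisational measures

    record = ProcessingRecord(
        activity="Chatbot support transcripts",
        purpose="Customer support quality review",
        lawful_basis="legitimate interests (balancing test documented)",
        data_categories=["contact details", "support messages"],
        recipients=["internal support team", "hosting provider (processor)"],
        retention="12 months, then deleted",
        safeguards="encryption at rest, role-based access, PII redaction",
    )

    # Registers like this can be exported for supervisory authorities on request.
    print(json.dumps(asdict(record), indent=2))
    ```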

  • 🍀Apolline Nielsen

    Senior Marketing Manager | B2B Tech | Account Based Marketing | Demand Generation | Growth Marketing | T-Shaped Marketer

    73,595 followers

    I recently talked with a fellow marketer about account scoring in #ABM. They struggled to adapt to recent privacy laws, which made me think: we must rethink how we do this.

    Account scoring is evolving. It's moving way beyond simple intent data. You need a new approach to find high-potential accounts in a privacy-first world. I call it Account Scoring 2.0.

    But why?
    ➖ Traditional intent data is becoming less reliable.
    ➖ Privacy regulations are changing.
    ➖ Third-party cookies are fading away.
    ➖ We can't rely on old methods.
    ➖ We need to be more innovative.

    👉🏾 Account Scoring 2.0 helps you focus on first-party data. This is data you collect directly from your target accounts, like website visits, content downloads, and engagement with your emails. Collecting and analyzing this data is valuable. It's also privacy-compliant.

    👉🏾 Also look out for behavioral signals. Are target accounts engaging with your content? Are they attending your webinars? Are they interacting with your sales team? These actions show interest and suggest potential.

    👉🏾 Predictive modeling plays a key role too. You can use AI to analyze first-party and behavioral data. This helps you predict which accounts are most likely to convert. It allows you to prioritize your efforts. Remember, it's about working smarter, not harder.

    👉🏾 Don't forget contextual data; it matters, too. What's happening in the market? Look for industry trends that align with your offerings. Are there changes in your target accounts' businesses? Understanding the context helps refine your scoring. (A minimal scoring sketch follows below.)

    Look at Account Scoring 2.0 as a strategy, not just a technology. It's about understanding your ideal customer profile, aligning sales and marketing, and building relationships while respecting privacy.

    What are your thoughts on the future of account scoring? Have you used it before?

    #b2bmarketing #marketingstrategy
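
    Here is a minimal sketch of what a first-party, signal-weighted account score could look like. The signal names and weights are illustrative assumptions, not a published scoring model; in practice the weights would be fit on historical conversion data.

    ```python
    # Illustrative weights for first-party and behavioral signals.
    SIGNAL_WEIGHTS = {
        "website_visit": 1.0,
        "content_download": 3.0,
        "email_engagement": 2.0,
        "webinar_attendance": 5.0,
        "sales_interaction": 8.0,
    }

    def score_account(signals: dict[str, int]) -> float:
        """Weighted sum over first-party signal counts for one account."""
        return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
                   for name, count in signals.items())

    accounts = {
        "Acme Corp":  {"website_visit": 12, "content_download": 2, "webinar_attendance": 1},
        "Globex Ltd": {"website_visit": 3, "email_engagement": 1},
    }

    # Rank accounts by score to prioritize outreach.
    for name, signals in sorted(accounts.items(), key=lambda kv: -score_account(kv[1])):
        print(f"{name}: {score_account(signals):.1f}")
    ```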

  • Teresa Troester-Falk

    Founder, BlueSky PrivacyStack | Subscribe to The Privacy Tools Trap | Author “So You Got the Privacy Officer Title—Now What?” | Resources for Privacy Teams & Solo Pros | GDPR • CCPA

    7,588 followers

    Most new privacy professionals with fresh CIPP certifications are unprepared for this conversation:

    "We want to track what customers look at on our website and send them targeted emails about those products. That’s fine since they’re already our customers, right?"

    You know the legal framework. You understand GDPR. You passed your certification. But now you're facing a room of marketing stakeholders who need answers that help them do their jobs.

    Knowledge tells you: this involves processing personal data for marketing, so you need to check the lawful basis, likely legitimate interests with a balancing test, plus consider ePrivacy rules for tracking.

    Judgment asks: does this specific use case make sense?
    → What exactly are they tracking? Page views or detailed behavior?
    → What does “personalization” mean here: recommendations or aggressive targeting?
    → What did customers expect when signing up?
    → Can they easily opt out?
    → Is this helpful to the customer or just to marketing?

    The legal answer is the same. The practical approach varies completely.

    This gap isn’t discussed enough in privacy education. We learn the "what" and "why" in certification programs, but day-to-day privacy work is all about the "when" and "how."
    → When to push back vs. find creative workarounds
    → How to get buy-in without budget or authority
    → When "perfect" compliance isn’t realistic, and what to do instead
    → How to speak business language while holding privacy lines

    Many privacy professionals struggle here because we're:
    → Waiting for perfect info before acting
    → Speaking only in compliance terms
    → Afraid to make the wrong call and get blamed

    But here’s the reality: judgment comes from experience, and imperfect action beats perfect paralysis. The most effective privacy professionals aren’t those who memorize every regulation. They’re the ones who navigate gray areas and keep the business moving.

    Real examples of knowledge vs. judgment:

    → The Marketing Automation Dilemma
    Knowledge: needs a lawful basis, tracking consent, and an LI balancing test.
    Judgment: start with product category suggestions, include an opt-out, and test customer response before expanding.

    → The Vendor Assessment Crisis
    Knowledge: a DPA and a security questionnaire are needed.
    Judgment: the vendor handles minimal data, so go live now with the essentials and run the full review in parallel.

    → The Data Retention Debate
    Knowledge: delete data when it is no longer needed.
    Judgment: tier retention by sensitivity and business value with review points, not a one-size-fits-all policy. (A minimal tiering sketch follows below.)

    Certifications teach you to spot problems. Experience teaches you to solve them.

    What’s the biggest gap you’ve faced between privacy theory and real-world practice?

    P.S. If you’re feeling this tension, you’re right on track. This isn’t a flaw in your education. It’s the start of real expertise. The most effective privacy professionals I know all went through this same shift.
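
    As one way to picture tiered retention, here is a minimal sketch mapping data categories to retention periods and review cadences. The tiers and durations are illustrative assumptions, not legal advice; actual periods must come from legal review.

    ```python
    from datetime import date, timedelta

    # Illustrative tiers; real retention periods require legal sign-off.
    RETENTION_TIERS = {
        "high_sensitivity": {"keep_days": 90, "review": "quarterly"},   # e.g. health data
        "medium_sensitivity": {"keep_days": 365, "review": "yearly"},   # e.g. support tickets
        "low_sensitivity": {"keep_days": 730, "review": "yearly"},      # e.g. aggregate analytics
    }

    def is_due_for_deletion(category: str, collected_on: date,
                            today: date | None = None) -> bool:
        """True once a record has outlived its tier's retention period."""
        today = today or date.today()
        keep = timedelta(days=RETENTION_TIERS[category]["keep_days"])
        return today - collected_on > keep

    print(is_due_for_deletion("high_sensitivity", date(2024, 1, 10), today=date(2024, 6, 1)))  # True
    print(is_due_for_deletion("low_sensitivity", date(2024, 1, 10), today=date(2024, 6, 1)))   # False
    ```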
