Best Practices In Technology

Explore top LinkedIn content from expert professionals.

  • View profile for Armand Ruiz
    Armand Ruiz is an Influencer

    building AI systems @meta

    206,260 followers

    I’ve been practically living in Claude Code lately. If you build with AI daily, you know that your orchestration workflow is just as important as the model itself. Recently, Boris Cherny (the creator of Claude Code at Anthropic) shared the internal best practices and workflows his team actually uses on a daily basis. Someone brilliantly distilled those threads into a structured CLAUDE.md file that you can drop straight into the root of any project. It essentially acts as a system prompt that turns Claude into a much more autonomous, rigorous engineering partner. Here is what the file enforces:

    1/ Workflow Orchestration: Mandates a "Plan Mode Default" for any task over 3 steps and uses subagents liberally to keep the main context window clean.

    2/ The Self-Improvement Loop: This is the real magic. After ANY correction, it updates a tasks/lessons.md file. You are building a compounding system where the mistake rate drops over time because it actively learns from your feedback.

    3/ Verification Before Done: It cannot mark a task complete without proving it works (diffing behavior, running tests, checking logs). "Would a staff engineer approve this?"

    4/ Autonomous Bug Fixing: Zero hand-holding. You point it at failing CI tests or error logs, and it goes to work without requiring constant context switching from you.

    5/ Strict Task Management: Forces a "Plan First" approach written to a todo.md with checkable items before implementation even starts.

    It forces the AI to prioritize simplicity, find root causes instead of temporary fixes, and minimize the blast radius of its changes. If you are spending hours a day in the terminal with AI, setting up a strong .md instruction file like this will save you a massive amount of time and frustration. It takes time and effort to set this up. If you do, you are ahead of everyone else.
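    A file like the one described might look something like this — an illustrative sketch assembled from the rules the post lists, not Cherny's actual file:

    ```markdown
    # CLAUDE.md — project working agreement

    ## Workflow
    - For any task over 3 steps, default to plan mode and write the plan
      to todo.md as checkable items before implementing.
    - Use subagents liberally for research and exploratory work so the
      main context window stays clean.

    ## Self-improvement
    - After ANY correction from the user, append the lesson to
      tasks/lessons.md before continuing.

    ## Verification
    - Never mark a task complete without proof: run the tests, diff the
      behavior, check the logs.
    - Ask: "Would a staff engineer approve this?"

    ## Principles
    - Prefer the simplest change that works; fix root causes, not symptoms.
    - Minimize the blast radius of every change.
    ```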

  • View profile for Jyothish Nair

    Doctoral Researcher in AI Strategy & Human-Centred AI | Technical Delivery Manager at Openreach

    19,098 followers

    Reliability, evaluation, and “hallucination anxiety” are where most AI programmes quietly stall. Not because the model is weak. Because the system around it is not built to scale trust.

    When companies move beyond demos, three hard questions appear:
    → Can we rely on this output?
    → Do we know what “good” actually looks like?
    → How much human oversight is enough?

    The fix is not better prompting. It is strategy and operating discipline.

    𝐅𝐢𝐫𝐬𝐭: Define reliability like a product, not a vibe. Every serious AI use case should have a one-page SLO sheet with measurable targets across:
    → Task success ↳ Right-first-time rate and rubric-based acceptance
    → Factual grounding ↳ Evidence coverage and unsupported-claim tracking
    → Safety and compliance ↳ Policy violations and PII leakage
    → Operational quality ↳ Latency, cost per task, escalation to humans
    Now “good” is no longer opinion. It is observable.

    𝐒𝐞𝐜𝐨𝐧𝐝: Evaluation must be continuous, not a one-off demo test. Use a simple loop:
    Plan: Define rubrics, datasets, and risk tiers
    Do: Run offline evaluations and limited pilots
    Check: Monitor drift and regressions weekly
    Act: Update prompts, data, guardrails, and workflows

    Support this with an AI test pyramid:
    → Unit checks for prompts and tool behaviour
    → Scenario tests for real edge failures
    → Regression benchmarks to prevent backsliding
    → Live monitoring in production
    Add statistical control charts, and you can detect silent degradation before users do.

    𝐓𝐡𝐢𝐫𝐝: Reduce hallucinations by design. Run a short failure-mode workshop and engineer controls:
    → Require retrieval or evidence before answering
    → Allow safe abstention instead of confident guessing
    → Add claim checking and tool validation
    → Use structured intake and clarifying flows
    You are not asking the model to behave. You are designing a system that expects failure and contains it.

    𝐅𝐨𝐮𝐫𝐭𝐡: Make human-in-the-loop affordable. Tier risk:
    → Low risk: Light sampling
    → Medium risk: Triggered review
    → High risk: Mandatory approval
    Escalate only when signals demand it: low confidence, missing evidence, policy flags, or novelty spikes. Review becomes targeted, fast, and a source of improvement data.

    𝐅𝐢𝐧𝐚𝐥𝐥𝐲: Operate it like a capability. Track outcomes, risk, delivery speed, and cost on a single dashboard. Hold a short weekly reliability stand-up focused on regressions, failure modes, and ownership.

    What you end up with is simple:
    ↳ Use case catalogue with risk tiers
    ↳ Clear SLOs and error budgets
    ↳ Continuous evaluation harness
    ↳ Built-in controls
    ↳ Targeted human review
    ↳ Reliability cadence

    AI does not scale on intelligence alone. It scales on measurable trust.

    ♻️ Share if you found this useful. ➕ Follow Jyothish Nair for reflections on AI, change, and human-centred AI #AI #AIReliability #TrustAtScale #OperationalExcellence
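    The control-chart idea above fits in a few lines of code. A minimal sketch, assuming a weekly right-first-time rate as the tracked metric and a 3-sigma rule over a baseline window (the data and thresholds are illustrative, not a production monitor):

    ```python
    def control_limits(baseline_rates):
        """Compute 3-sigma control limits from a baseline window of weekly rates."""
        n = len(baseline_rates)
        mean = sum(baseline_rates) / n
        variance = sum((r - mean) ** 2 for r in baseline_rates) / n
        sigma = variance ** 0.5
        return mean - 3 * sigma, mean + 3 * sigma

    def flag_degradation(baseline_rates, new_rate):
        """Flag silent degradation: the new weekly rate falls below the lower limit."""
        lower, _upper = control_limits(baseline_rates)
        return new_rate < lower

    # Eight weeks of a stable right-first-time rate, then a silent regression.
    baseline = [0.91, 0.92, 0.90, 0.93, 0.91, 0.92, 0.90, 0.91]
    print(flag_degradation(baseline, 0.84))  # → True: well outside the control band
    print(flag_degradation(baseline, 0.91))  # → False: normal variation
    ```

    The point is that the alert fires on statistical evidence rather than on a user complaint, which is what "detect silent degradation before users do" means in practice.
    
    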

  • View profile for Santanu Das

    Electrical Engineering | Advanced Diploma in Fire Engineering and Safety Operation | Diploma in Fire Safety Engineering | NEBOSH IGC

    40,533 followers

    ⚙️ Machinery Safety & Preventive Maintenance 🛠️🔧

    Every machine part has a finite lifespan, and neglecting proper maintenance can lead to unexpected failures, jeopardizing both operations and safety! 🚨 Hydraulic system malfunctions can be particularly hazardous, especially under load, where even a near miss can quickly escalate into a catastrophic incident.

    ✅ Key Safety Practices to Prevent Machinery Failures 🔍📋

    1️⃣ Perform Daily Inspections 🔄✅
    🔹 Use a structured checklist 📋 to ensure all components are in optimal working condition.
    🔹 Check for loose bolts, leaks, abnormal noises, or signs of wear.
    🔹 Address minor issues immediately to prevent major failures! 🚨

    2️⃣ Know Your Machine’s Part Lifespan ⏳📖
    🔹 Refer to manufacturer manuals 📚 to determine the expected lifespan of critical components.
    🔹 Replace moving parts, hydraulic hoses, and seals before they fail! 🚧
    🔹 Use only approved replacement parts to maintain safety and efficiency.

    3️⃣ Work Safely Around Machinery 🦺🚫
    🔹 Never stand under suspended loads or moving parts. One failure can be fatal! ⚠️
    🔹 Use mechanical blocks and safety stands to prevent accidental movements.
    🔹 Always implement LOTO (Lockout/Tagout) 🔒 before maintenance to eliminate unexpected energy releases!

    4️⃣ Conduct Risk Assessments & Train Workers 📊👷♂️
    🔹 Engage machine operators & maintenance teams in hazard identification. 🚧
    🔹 Use real-life case studies to highlight potential dangers & mitigation strategies.
    🔹 Implement training programs to reinforce best practices in safe machine handling & emergency response.

    💡 Remember: A well-maintained machine is a safe machine! ✅ Regular preventive maintenance not only extends the life of machinery but also safeguards workers from life-threatening incidents! 🛡️ Safety first, always! 👷♂️🔧🔥

  • View profile for Jean Gan

    Head of Legal & Compliance (APAC) | AI Governance & Accountable AI | PhD Researcher (Law & AI) | Founder, Global Legal AI, AIgnite Women

    23,517 followers

    I’ve had a lot of people reach out recently asking for tips and guidance on AI governance 🤝 That tells me one thing. Many organisations are still at a very early stage, trying to work out what AI governance actually looks like in practice.

    Over the past year, I’ve shared a number of playbooks on AI, law, and governance. The feedback has been overwhelmingly positive, and I’m truly humbled by it 💬 It confirmed that practical, grounded guidance is what people are looking for.

    So I put together a 𝐂𝐨𝐫𝐩𝐨𝐫𝐚𝐭𝐞 𝐀𝐈 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 𝐏𝐥𝐚𝐲𝐛𝐨𝐨𝐤 📘 This one goes deeper and focuses on the questions I’m being asked most often. It covers:
    • Why AI governance fails in real organisations
    • What AI governance actually means at corporate and board level
    • A practical operating model across board, executive, legal, business, and IT 🧭
    • The AI risk landscape boards actually care about, beyond bias alone
    • Accountability, approval, and escalation, where most governance breaks down ⚠️
    • Third-party and vendor AI risk, and why accountability cannot be outsourced
    • What regulators and boards will expect next
    • Practical next steps that work inside existing governance structures

    It’s not meant to be technical or theoretical, which, in my opinion, would not be helpful. It is meant to be feasible and useful in reality. 💡 Also, it’s written for GCs, risk and compliance leaders, and senior management responsible for governing AI in real organisations. And I know many are struggling in this area.

    If this is relevant to you:
    👉 Make sure we’re connected on LinkedIn
    👉 Comment 𝐀𝐈 𝐆𝐎𝐕𝐄𝐑𝐍𝐀𝐍𝐂𝐄 below and I’ll send it to you 📩

    Once you’ve read it, let me know what stage your organisation is at.

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    716,661 followers

    A sluggish API isn't just a technical hiccup – it's the difference between retaining and losing users to competitors. Let me share some battle-tested strategies that have helped many achieve 10x performance improvements:

    1. 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝘁 𝗖𝗮𝗰𝗵𝗶𝗻𝗴 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆
    Not just any caching – but strategic implementation. Think Redis or Memcached for frequently accessed data. The key is identifying what to cache and for how long. We've seen response times drop from seconds to milliseconds by implementing smart cache invalidation patterns and cache-aside strategies.

    2. 𝗦𝗺𝗮𝗿𝘁 𝗣𝗮𝗴𝗶𝗻𝗮𝘁𝗶𝗼𝗻 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻
    Large datasets need careful handling. Whether you're using cursor-based or offset pagination, the secret lies in optimizing page sizes and implementing infinite scroll efficiently. Pro tip: Always include total count and metadata in your pagination response for better frontend handling.

    3. 𝗝𝗦𝗢𝗡 𝗦𝗲𝗿𝗶𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻
    This is often overlooked, but crucial. Using efficient serializers (like MessagePack or Protocol Buffers as alternatives), removing unnecessary fields, and implementing partial response patterns can significantly reduce payload size. I've seen API response sizes shrink by 60% through careful serialization optimization.

    4. 𝗧𝗵𝗲 𝗡+𝟭 𝗤𝘂𝗲𝗿𝘆 𝗞𝗶𝗹𝗹𝗲𝗿
    This is the silent performance killer in many APIs. Using eager loading, implementing GraphQL for flexible data fetching, or utilizing batch loading techniques (like the DataLoader pattern) can transform your API's database interaction patterns.

    5. 𝗖𝗼𝗺𝗽𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀
    GZIP or Brotli compression isn't just about smaller payloads – it's about finding the right balance between CPU usage and transfer size. Modern compression algorithms can reduce payload size by up to 70% with minimal CPU overhead.

    6. 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻 𝗣𝗼𝗼𝗹
    A well-configured connection pool is your API's best friend. Whether it's database connections or HTTP clients, maintaining an optimal pool size based on your infrastructure capabilities can prevent connection bottlenecks and reduce latency spikes.

    7. 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝘁 𝗟𝗼𝗮𝗱 𝗗𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻
    Beyond simple round-robin – implement adaptive load balancing that considers server health, current load, and geographical proximity. Tools like Kubernetes horizontal pod autoscaling can help automatically adjust resources based on real-time demand.

    In my experience, implementing these techniques reduces average response times from 800ms to under 100ms and helps handle 10x more traffic with the same infrastructure. Which of these techniques made the most significant impact on your API optimization journey?
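    The cache-aside pattern from point 1 can be sketched without any external dependencies. Here a TTL'd in-memory dict stands in for Redis, and the function and key names are illustrative assumptions:

    ```python
    import time

    class TTLCache:
        """Minimal stand-in for Redis: values expire after ttl_seconds."""
        def __init__(self, ttl_seconds=60):
            self.ttl = ttl_seconds
            self._store = {}  # key -> (expires_at, value)

        def get(self, key):
            entry = self._store.get(key)
            if entry is None or entry[0] < time.monotonic():
                return None  # miss or expired
            return entry[1]

        def set(self, key, value):
            self._store[key] = (time.monotonic() + self.ttl, value)

    def get_user(user_id, cache, fetch_from_db):
        """Cache-aside: try the cache first, fall back to the DB, then populate."""
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached
        value = fetch_from_db(user_id)
        cache.set(key, value)
        return value

    calls = []
    def fake_db(user_id):
        calls.append(user_id)          # count real "DB" hits
        return {"id": user_id, "name": "Ada"}

    cache = TTLCache(ttl_seconds=60)
    get_user(42, cache, fake_db)
    get_user(42, cache, fake_db)       # second call is served from the cache
    print(len(calls))                  # → 1: the DB was only hit once
    ```

    The same shape works with a real Redis client; the TTL doubles as a crude invalidation strategy, which is why choosing "what to cache and for how long" is the hard part.
    
    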

  • View profile for Bobby Guelich

    Co-Founder and CEO at Elion

    9,834 followers

    We’ve just released the definitive guide to the AI-driven prior auth space. Prior auth is one of those areas that’s both ripe for AI and deeply frustrating for everyone involved:
    • The hardest parts of the process are exactly what AI is good at
    • If done well, it can drive real ROI for health systems and providers
    • And yet, it remains painful, slow, and opaque

    The reality is that prior auth is incredibly complex and fragmented. Requirements vary dramatically by payer, plan, procedure, state, and site of service. Submission rules are often difficult to even find, let alone interpret. And once a request is submitted, turnaround times are long while visibility into status is limited at best.

    It's no surprise then that dozens of vendors have jumped in to try to fix it. The problem? It’s become almost impossible to make sense of the landscape. Who does what? Which pain points do they really address? How do they actually solve the issues?

    That’s what this guide is for. We explain how the prior auth process works across both medical and pharmacy benefits. We highlight where AI is being applied today. And we map the vendor landscape so you can see how the pieces fit together.

    This is the resource I’ve wanted for a long time to understand the space. Link is in the comments, and we'd love to hear your thoughts. Special shoutout to Colin DuRant for his incredible work putting this together 🙌

  • View profile for Jen Easterly

    CEO, RSAC | Leader | Speaker | Advisor | Optimist | #MoveFast&BuildThings

    124,341 followers

    Any MFA is better than no MFA, but recent attacks make it clear: legacy MFA is no match for modern threats. Happy Cyberz Saturday! Check out this piece from my teammates Bob Lord & Grant Dasher on USDA’s FIDO implementation. BLUF: USDA’s success story should inspire all enterprises to migrate to FIDO authentication. Customers expect their providers to take security seriously, and given today’s threat landscape, organizations must ensure they are mitigating one of the most common and effective attack vectors. 👇

    As the saying goes, malicious actors don’t break in—they log in. There's a significant truth in that statement. Today, many organizations struggle to protect their staff from credential phishing, a challenge that's only grown as attackers increasingly execute “MFA bypass” attacks.

    In an MFA bypass attack, threat actors use social engineering techniques to trick victims into providing their username and password on a fake website. If victims are using “legacy MFA” (such as SMS, authenticator apps, or push notifications), the attackers simply request the MFA code or trigger the push notification. If they can convince someone to reveal two pieces of information (username and password), they can likely manipulate them into sharing three (username, password, and MFA code or action).

    Make no mistake—any form of MFA is better than no MFA. But recent attacks make it clear: legacy MFA is no match for modern threats. So, what can organizations do? Sometimes a case study can answer that question.

    Today, CISA and the USDA are releasing a case study that details the USDA’s deployment of FIDO capabilities to approximately 40,000 staff. While most of their staff have been issued government-standard Personal Identity Verification (PIV) smartcards, this technology is not suitable for all employees, such as seasonal staff or those working in specialized lab environments where decontamination procedures could damage standard PIV cards. This case study outlines the challenges the USDA faced, how they built their identity system, and their recommendations to other enterprises. Our personal favorite recommendation: "Always be piloting".

    FIDO authentication addresses MFA-bypass attacks by using modern cryptographic techniques built into the operating systems, phones, and browsers we already use. Single sign-on (SSO) providers and popular websites also support FIDO authentication. Here’s the remarkable part about FIDO: even if malicious actors craft a convincing scheme to steal staff credentials, and the staff comply, the attackers still won’t be able to compromise the account.

    The USDA’s success story should inspire all enterprises to migrate to FIDO authentication. Read the full case study here: https://lnkd.in/eGM2RZmz
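    The reason FIDO survives the MFA-bypass attack described above is origin binding: the authenticator signs the server's challenge together with the origin the browser actually reports, so a response produced on a look-alike phishing domain never verifies for the real site. A toy sketch of that idea — HMAC stands in for the real public-key signature, and all names and domains are illustrative:

    ```python
    import hashlib
    import hmac

    def sign_assertion(credential_key, challenge, origin):
        """The authenticator signs the challenge bound to the origin it sees."""
        message = origin.encode() + b"|" + challenge
        return hmac.new(credential_key, message, hashlib.sha256).digest()

    def verify_assertion(credential_key, challenge, expected_origin, signature):
        """The relying party only accepts signatures bound to its own origin."""
        expected = sign_assertion(credential_key, challenge, expected_origin)
        return hmac.compare_digest(expected, signature)

    key = b"per-site credential secret"
    challenge = b"random-server-challenge"

    # Legitimate login: the browser reports the real origin.
    good = sign_assertion(key, challenge, "https://login.example.gov")
    print(verify_assertion(key, challenge, "https://login.example.gov", good))     # → True

    # Phishing: the victim is on a look-alike domain, so the signed origin differs.
    phished = sign_assertion(key, challenge, "https://login.examp1e.gov")
    print(verify_assertion(key, challenge, "https://login.example.gov", phished))  # → False
    ```

    This is why "even if the staff comply, the attackers still won't be able to compromise the account": the user never has a code to hand over, and the cryptographic response is useless anywhere except the genuine origin.
    
    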

  • View profile for Greg Coquillo
    Greg Coquillo is an Influencer

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    227,482 followers

    Are you using Claude to autocomplete or to think in parallel with you? Many developers treat it like a faster tab key. The real power shows up when you use it as a second brain running alongside yours. Here’s what that looks like in practice.

    1. Run Work in Parallel
    Spin up multiple sessions and worktrees so planning, refactoring, reviewing, and debugging happen simultaneously instead of sequentially.

    2. Start Complex Tasks in Plan Mode
    Outline architecture and approach before writing code, so execution becomes clean and intentional instead of reactive.

    3. Maintain a Living CLAUDE.md
    Document mistakes, patterns, and guardrails so Claude improves with your workflow and reduces repeated errors over time.

    4. Turn Repetition into Skills
    Automate recurring tasks with reusable commands and structured prompts so you build once and reuse everywhere.

    5. Delegate Debugging
    Provide logs, failing tests, or CI output and let Claude iterate toward solutions while you focus on higher-level thinking.

    6. Challenge the Output
    Ask for edge cases, diff comparisons, cleaner abstractions, and alternative designs to push beyond “good enough.”

    7. Optimize Your Environment
    Set up your terminal, tabs, and context structure so you reduce friction and maximize visibility while working.

    8. Use Subagents for Heavy Lifting
    Offload complex or exploratory tasks to parallel agents so your main context stays clean and focused.

    9. Query Data Directly
    Use Claude to interact with databases, metrics, and analytics tools so you reason about data instead of manually extracting it.

    10. Turn It Into a Learning Engine
    Ask for diagrams, system explanations, and critique so every project improves your mental models.

    The difference is simple: Autocomplete makes you faster. Parallel thinking makes you better. The question is how you’re choosing to use it.
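    The reusable commands from point 4 can live as markdown files in a project's .claude/commands directory, where Claude Code picks them up as custom slash commands. An illustrative sketch — the file name and prompt content are assumptions, not a canonical example:

    ```markdown
    <!-- .claude/commands/fix-ci.md — invoked in a session as /fix-ci -->
    Investigate the latest failing CI run for this project:
    1. Read the failing test output and identify the root cause.
    2. Propose a minimal fix; do not touch unrelated code.
    3. Run the affected tests locally and confirm they pass.
    4. Summarize what failed, why, and what you changed.
    ```

    Building a small library of these turns one-off prompt engineering into something you write once and reuse across every session.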

  • View profile for Jeff Winter
    Jeff Winter is an Influencer

    Industry 4.0 & Digital Transformation Enthusiast | Business Strategist | Avid Storyteller | Tech Geek | Public Speaker

    172,149 followers

    Every few years, something slams the brakes on business-as-usual. Then hits the accelerator.

    COVID did it. It turned five-year digital roadmaps into five-week survival plans. Then came the supply chain fallout. Every weakness in visibility, data, and flexibility was suddenly front-page news. Those events triggered a tidal wave of tech investment. Automation. Cloud. AI. MES. Data platforms. Progress born out of panic.

    And that’s the heart of 𝐌𝐚𝐫𝐭𝐞𝐜’𝐬 𝐋𝐚𝐰. Technology moves at an exponential rate. Organizations evolve at a logarithmic one. That gap keeps growing until something big forces a reset. Not because companies want to change, but because they have no choice.

    The next big disruption is already loading. It might be AI regulation, sustainability mandates, or the collapse of old operating models under the weight of new data expectations.

    𝐈𝐟 𝐲𝐨𝐮 𝐜𝐨𝐮𝐥𝐝 𝐝𝐨 𝐨𝐧𝐞 𝐭𝐡𝐢𝐧𝐠 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭𝐥𝐲 𝐛𝐞𝐟𝐨𝐫𝐞 𝐢𝐭 𝐡𝐢𝐭𝐬, 𝐡𝐞𝐫𝐞’𝐬 𝐦𝐲 𝐚𝐝𝐯𝐢𝐜𝐞: Stop building technology roadmaps in a vacuum. Start building organizational readiness. That means investing in leadership alignment, communication cadence, employee training, and data literacy. It means mapping decision-making speed, defining clear ownership of digital initiatives, and stress-testing how fast your teams can pivot when priorities shift.

    Because when the next shock arrives, your tools won’t save you. Your ability to respond with clarity and confidence will.

    𝐒𝐨𝐮𝐫𝐜𝐞: https://lnkd.in/eP8bRaK4

    *******************************************
    • Visit www.jeffwinterinsights.com for access to all my content and to stay current on Industry 4.0 and other cool tech trends
    • Ring the 🔔 for notifications!
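    Martec's Law's widening gap can be made concrete with two toy curves — exponential technology change against logarithmic organizational change. The growth rates here are illustrative assumptions, not empirical fits:

    ```python
    import math

    def tech_capability(years, rate=0.5):
        """Technology changes exponentially."""
        return math.exp(rate * years)

    def org_capability(years):
        """Organizations absorb change logarithmically."""
        return math.log(1 + years)

    # The gap between the two curves only widens with time — until a
    # shock (COVID, supply chains, regulation) forces a hard reset.
    gaps = [tech_capability(t) - org_capability(t) for t in (1, 5, 10)]
    print([round(g, 1) for g in gaps])  # strictly increasing gap
    ```

    Whatever the exact rates, the qualitative shape is the point: a logarithmic curve can never catch an exponential one, so the only lever an organization controls is how steep its own curve is — which is the case for investing in readiness rather than roadmaps.
    
    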

  • View profile for Dr. Brindha Jeyaraman

    Founder & CEO, Aethryx | Fractional Leader in Enterprise AI Engineering, Ops & Governance | Doctorate in Temporal Knowledge Graphs | Architecting Production-Grade AI | Ex-Google, MAS, A*STAR | Top 50 Asia Women in Tech

    18,397 followers

    (Part 4 of my series: The Boardroom Guide to AI-Ready Data Strategy)

    For years, organisations debated Data Lakes vs. Data Warehouses. But today, that debate is irrelevant.
    1. Infrastructure has become a commodity.
    2. Compute is cheap.
    3. Storage is cheap.
    4. Pipelines are automated.

    The real bottleneck to scaling AI isn’t technology. It’s meaning. If Marketing, Finance, Risk, and Product all define foundational terms like “Customer”, “Revenue”, “Churn”, and “Exposure” differently, your AI systems will fail instantly. They will generate plausible-sounding nonsense based on conflicting definitions. This is why modern AI-driven organisations are shifting from infrastructure debates to semantic alignment.

    The 3 Architecture Priorities for AI-Ready Enterprises

    1️⃣ Decouple Compute & Storage
    So you can scale elastically, control costs, and avoid vendor lock-in.

    2️⃣ Build a Semantic Layer
    A unified business logic layer sitting above your physical data. It defines metrics, joins, relationships, and meaning — consistently across the enterprise. This becomes the “Rosetta Stone” for your LLMs and Agentic AI systems.

    3️⃣ Move to Data Products
    Instead of fragile pipelines, build domain-owned, SLA-backed, well-documented data products. This accelerates cross-team adoption and eliminates ambiguity.

    You don’t fail at AI because your model is weak. You fail because your definitions are weak. If your organisation wants reliable GenAI, RAG, and autonomous agents, your first investment is not GPUs, it is the Semantic Layer. Don’t just modernise your stack. Modernise your logic.

    #DataArchitecture #SemanticLayer #DataProducts #DataMesh #AIStrategy #EnterpriseArchitecture #GenAI #ModernDataStack
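    A semantic layer entry for one of the contested terms above might look like this. This is a tool-agnostic, illustrative sketch — the file path, field names, and SLA values are assumptions, not any specific product's syntax:

    ```yaml
    # semantic_layer/metrics/churn.yml — one shared definition for every consumer
    metric: churn_rate
    owner: customer-analytics          # domain team accountable for this data product
    description: >
      Share of customers active at the start of the period with no
      qualifying activity by the end of the period.
    entity: customer                   # joins resolve through the shared entity model
    numerator: count(customer_id) where is_churned = true
    denominator: count(customer_id) where active_at_period_start = true
    grain: month
    sla:
      freshness: 24h
      completeness: 99.5%
    ```

    Once Marketing, Finance, and an LLM agent all query "churn" through this definition instead of writing their own SQL, the conflicting-definitions failure mode disappears by construction.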
