
Message

Be Fair

"Be fair to the company, be fair to the employee, be fair to the client and be fair to the environment"

Your list of Cons below pinpoints the challenge of interpreting what I feel is fair in the context of different human beliefs. Please create:

  1. Define "fair to the company", "fair to the employee", "fair to the client" and "fair to the environment". You provided great examples for company and client.
  2. Pair company, employee, client and environment with a qualifier.
  3. Give at least 3 scenario-based examples for each of company, employee, client and environment.

The Cons:

  • Subjectivity. What one person deems “fair” another may see as too lenient or too strict.
  • Lack of specificity. Without guardrails or examples, “fair” can mean “whatever feels right,” which risks inconsistency.
  • Cultural variance. Different backgrounds bring different notions of fairness (e.g., absolute equality vs. equity based on need).

Got it, Jesse—let’s turn “fair” into something concrete. We’ll:


  1. Define each pillar so everyone knows exactly what “fair” means.
  2. Pair each with a qualifier to add clarity.
  3. Give 3+ real‑world scenarios for each, showing how to apply it when there’s no policy/SOP.

Definitions

Fair to the Company: Making decisions that safeguard Deltra’s long‑term health—financially, reputationally, and strategically.

Fair to the Employee: Treating teammates with respect for their time, growth, and well‑being—ensuring no one is set up to burn out or feel undervalued.

Fair to the Client: Delivering transparent, honest service—setting clear expectations, communicating openly, and honoring commitments.

Fair to the Environment: Minimizing negative impact on our planet—choosing sustainable practices, reducing waste, and conserving resources.


Qualified Pairings

| Pillar | Qualifier | Resulting Phrase |
|---|---|---|
| Company | Financially Responsible | “Be financially responsible to the company” |
| Employee | Supportively Equitable | “Be supportively equitable to the employee” |
| Client | Transparently Honest | “Be transparently honest to the client” |
| Environment | Sustainably Conscious | “Be sustainably conscious to the environment” |

By spelling out what “fair” really means, adding a qualifier to each, and walking through scenarios, you turn CECE from a fuzzy ideal into clear, repeatable guidance that everyone can apply consistently.


Scenario-Based Examples

Company (Financially Responsible)

  1. Vendor Selection Two hardware vendors bid. Vendor A is 5% cheaper now but has a shaky track record. Vendor B costs more but offers long‑term warranty and support.

    • Decision: Choose Vendor B for lower total cost of ownership—fair to the company’s budget and uptime.
  2. Budget Reallocation Mid‑year you need to cut costs. Marketing suggests trimming IT training; IT suggests cutting new software trials.

    • Decision: Protect core training (avoiding skill gaps) and postpone non‑critical trials—balances risk and ROI.
  3. Project Scope Change A client asks for an unbudgeted feature on an urgent timeline.

    • Decision: Analyze additional hours vs. profitability. If margin falls below threshold, negotiate a change‑order fee—preserves company health.

Employee (Supportively Equitable)

  1. Workload Overflow A teammate is juggling three high‑priority tickets in one day. No policy says you can’t reassign.

    • Decision: Offload the least urgent item to another team member—fair share of work.
  2. Training Requests An employee wants a pricey certification. Budget is limited.

    • Decision: Offer partial sponsorship plus time off to study, or find a lower‑cost alternative—supports growth without breaking the bank.

  3. Shift Coverage A technician must cover weekend on‑call. Another teammate volunteers—but always accepting leaves the workload unevenly shared.

    • Decision: Rotate weekend assignments equally over the quarter—ensures no one’s always on call.

Client (Transparently Honest)

  1. Missed Deadline You discover a delay on deliverables. No SLA covers this exact scenario.

    • Decision: Tell the client immediately, explain why, propose a new ETA—builds trust.

  2. Scope Creep A client keeps adding small tasks beyond the original quote.

    • Decision: Present an itemized list of extras and request approval or a revised quote—clear boundaries.

  3. Quality Trade‑Off A critical bug arises days before go‑live. You can ship on time with a patch later, or delay launch for full fix.

    • Decision: Discuss risks with the client and let them choose—gives them agency and transparency.

Environment (Sustainably Conscious)

  1. Server Provisioning A project needs extra VMs for testing. Spinning up 10 for a week vs. reusing existing ones…

    • Decision: Reuse or time‑box new VMs and shut them off when unused—cuts energy usage.

  2. Office Supplies You’re ordering printer cartridges and binders.

    • Decision: Choose remanufactured cartridges and recycled‑paper binders—reduces landfill waste.

  3. E‑waste Disposal Old laptops need replacement. No internal policy on disposal.

    • Decision: Partner with an e‑waste recycler that certifies safe data destruction—lowers environmental harm.

Vision and Mission Statement

Introduction

Vision

GOOGLE STUDIO AI "To be South Texas's most trusted and enduring technology partner for community banks and credit unions, fostering a legacy of security, innovation, and opportunity for generations to come."

DEEPSEEK AI "To be the trusted leader in IT managed services for community banks and credit unions in South Texas, ensuring long-term stability for our employees and unwavering reliability for our clients—today and for generations to come."

CHATGPT AI “To be the enduring partner of choice for community banks and credit unions throughout South Texas—empowering our clients and team members with innovative, reliable IT solutions and a culture rooted in fairness, long after our founder’s tenure.”

PERPLEXITY AI "To be the most trusted and innovative provider of IT solutions and services for community banks and credit unions in South Texas, empowering our clients, employees, and communities to thrive in a secure and technologically advanced environment, today and for generations to come."

Mission

"Deltra Systems LLC delivers premier managed IT services and products to community banks and credit unions across South Texas. Guided by our unwavering core values of fairness to our clients, employees, company, and environment, we are committed to providing reliable infrastructure and fostering the long-term success of the communities we serve."

"Deltra Systems, LLC empowers community banks and credit unions with innovative, secure, and scalable IT solutions. Guided by fairness—to our customers, employees, company, and environment—we build lasting partnerships through exceptional service, employee growth, and sustainable success. Our legacy is a business that thrives beyond its founders, providing enduring value to those we serve."

“At Deltra Systems LLC, we deliver proactive, compliance‑driven IT services—spanning cybersecurity, infrastructure management, and strategic consulting—tailored to the unique needs of community banks and credit unions in South Texas. We champion a people‑first culture that’s fair to our clients, our team, our company, and our environment, ensuring every solution is secure, efficient, and built to last.”

Deltra Systems, LLC is dedicated to delivering reliable, forward-thinking IT products and services with integrity and excellence. We strive to: Serve our clients with unwavering commitment, ensuring their success and security through managed IT solutions tailored to their needs; Foster a workplace where employees thrive professionally and personally, supported by meaningful benefits and opportunities for growth; Operate with fairness and respect for our clients, employees, the company, and the environment, upholding the core values upon which we were founded.

Values & Principles

Policies

Standard Operating Procedures

Values and Principles

Governance

Policies

Vulnerability Management

VM Lifecycle

The Qualys VMDR (Vulnerability Management, Detection, and Response) lifecycle is a continuous, seamlessly orchestrated workflow of automated asset discovery, vulnerability management, threat prioritization, and remediation. By adopting the VMDR lifecycle, organizations decrease their risk of compromise by effectively preventing breaches and quickly responding to threats. Benefits of using Qualys VMDR include:

  • Reduced time to remediate (TTR)
  • Full visibility and control
  • Reduced risk
  • Lower TCO and higher productivity

The VMDR

Qualys VMDR (Vulnerability Management, Detection, and Response) is a cloud-based platform that provides a comprehensive solution for identifying, assessing, prioritizing, and remediating vulnerabilities across an organization's IT environment. It helps organizations manage their cybersecurity risk by automating the vulnerability management lifecycle, from discovery to remediation.

Key Features and Capabilities:

  • Asset Discovery and Inventory: VMDR automatically identifies and inventories all IT assets, including those in cloud environments, on-premises, and mobile devices.
  • Vulnerability Assessment: It scans assets for vulnerabilities, misconfigurations, and other security weaknesses, using a variety of sensors (physical, virtual, cloud, and agents).
  • Prioritization and Risk Analysis: VMDR prioritizes vulnerabilities based on their potential impact and likelihood of exploitation, using a risk-based approach.
  • Remediation: It automates the remediation process by integrating with other systems like ServiceNow, enabling automated patching, configuration changes, and other mitigation actions.
  • Real-time Visibility: Provides a centralized dashboard with real-time insights into the security posture of the organization, enabling proactive threat management.
  • IT Service Management (ITSM) Integration: Integrates with ITSM tools to streamline the vulnerability management workflow and improve collaboration between IT and security teams.

Benefits of using Qualys VMDR:

  • Reduced Security Risk: By automating the vulnerability management lifecycle, VMDR helps organizations reduce their overall security risk.
  • Improved Efficiency: Automation and integrations with other systems streamline the vulnerability management process, improving efficiency and reducing manual effort.
  • Cost Savings: By prioritizing vulnerabilities and automating remediation, VMDR helps organizations save time and resources.
  • Enhanced Compliance: VMDR helps organizations meet compliance requirements by providing detailed risk assessments and remediation reports.

Comprehensive Vulnerability Management

Fixing vs Mitigating Vulnerabilities

Version 1

In cybersecurity, both fixing (also called remediation) and mitigating vulnerabilities are crucial for securing systems, but they represent different approaches to risk management.

Fixing (Remediation)

  • Goal: Completely eliminate the vulnerability by addressing the root cause.
  • Approach: Correcting or removing the weakness at its source, making it impossible to exploit.
  • Examples: Applying software patches, changing configurations, removing vulnerable components, or rewriting insecure code.
  • Permanence: Offers a permanent, long-term solution.
  • Timing: Typically involves a more structured process and may take more time and resources.
  • Effectiveness: Highly effective as it removes the threat entirely.

Mitigating

  • Goal: Reduce the likelihood or impact of a vulnerability being exploited, rather than eliminating it entirely.
  • Approach: Implementing temporary or compensating controls to minimize risk while a permanent fix is unavailable or pending.
  • Examples: Deploying firewalls, isolating vulnerable systems, enforcing access controls, using intrusion detection systems, or implementing multi-factor authentication.
  • Permanence: Temporary or partial measures that may not address the underlying issue.
  • Timing: Provides immediate protection and can be implemented faster than remediation in some cases.
  • Effectiveness: Reduces risk but does not eliminate the vulnerability.

When to Use Each

Organizations often use both strategies in a complementary way. Remediation is preferred for critical vulnerabilities that pose significant risks and where permanent fixes are available without causing excessive disruption. Mitigation is used as an interim solution when immediate remediation isn’t feasible, or for less critical vulnerabilities where the cost of remediation outweighs the potential risk. Mitigation can also act as a safety net while remediation is underway.
Key Differences at a Glance

| Factor | Vulnerability Mitigation | Vulnerability Remediation |
|---|---|---|
| Timing | Proactive (prevents future risks) | Reactive (addresses existing issues) |
| Examples | Access controls, encryption, segmentation | Patching, configuration changes |
| Cost | Typically lower | Potentially higher |
| Goal | Minimize risk and impact | Eliminate the vulnerability entirely |
| Risk Exposure | Reduced but potentially persistent | Vulnerability eliminated |

By combining both strategies, organizations can achieve a more robust cybersecurity posture, addressing both immediate threats and long-term security goals.
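The trade-offs above can be sketched as a simple decision rule. This is an illustrative sketch only: the function name, severity thresholds, and disruption labels are assumptions for the example, not an established formula or Deltra policy.

```python
def choose_strategy(severity: float, fix_available: bool, fix_disruption: str) -> str:
    """Illustrative rule of thumb for remediation vs. mitigation.

    severity: CVSS-style score (0.0-10.0); the 4.0 threshold is an assumption.
    fix_available: whether a patch or configuration change exists today.
    fix_disruption: expected operational impact ("low", "medium", or "high").
    """
    if fix_available and not (severity < 4.0 and fix_disruption == "high"):
        # A permanent fix exists and the risk justifies the disruption.
        return "remediate"
    # No fix yet, or a low-risk issue where the fix costs more than the risk:
    # apply compensating controls (isolation, access controls, IDS) for now.
    return "mitigate"

print(choose_strategy(9.8, True, "medium"))  # critical, fix available -> remediate
print(choose_strategy(7.5, False, "low"))    # no patch yet -> mitigate
print(choose_strategy(2.1, True, "high"))    # low risk, disruptive fix -> mitigate
```

In practice the inputs would come from a scanner and an asset inventory rather than hand-typed values; the point is that the remediate/mitigate choice can be made consistent rather than ad hoc.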

Version 2

Remediation vs. Mitigation: Two Sides of the Same Coin Remediation and mitigation are two different methods for dealing with gaps in an organization’s security posture. Remediation fixes the problem at the source, through your supplier’s own security controls. Mitigation involves using your internal security controls to compensate for any gap that the vendor is unable or unwilling to fix.

Threat intelligence is an important part of both approaches. Security teams use it to access external data feeds with precise information about specific attack vectors and the intentions of malicious actors. This helps focus your remediation and mitigation processes and prioritize the highest-risk security gaps.

What is Remediation?

Vulnerability remediation refers to the process of identifying gaps in a vendor’s security controls, prioritizing them to be fixed, and ensuring that they are addressed. For example, you employ a vendor to deliver office supplies to the workplace, but this creates a risk of unauthorized access to your premises. You remediate the vulnerability by requiring the vendor’s employees to sign in at the front desk and wear a visitor’s badge upon arrival.

Remediation can often be the fastest way to deal with vulnerabilities. Suppose your organization has gone through the painstaking process of selecting a vendor, only to discover that the preferred vendor has several gaps in their security controls. Instead of starting over to hunt for a different vendor, you can work together on a remediation plan to achieve the desired security level.

But not all vulnerabilities and risks can be fixed. For example, there may not be a readily available software patch that fixes a given cyber vulnerability, or it may take time until the software can be updated. Sometimes, you have to accept the risk of leaving these vulnerabilities, because the vendor cannot fix them. That’s when you’d turn to mitigation.

4 Steps to Remediation

Remediation is considered to be more proactive than mitigation when it comes to vulnerability management, because it aims to permanently resolve the problem at its source instead of minimizing its impact. It achieves this through four basic steps:

  1. Find. Finding vulnerabilities at scale is best done through a vulnerability management solution or penetration testing exercise.
  2. Prioritize. Determining which vulnerabilities present a real and present security risk, and which are low priority or do not need to be addressed.
  3. Fix. Implementing patches, updating software, or blocking vulnerabilities to mitigate risk.
  4. Monitor. Utilizing automated tools that deliver real-time alerts and notifications about vulnerabilities, because remediation is an ongoing process.

What is Mitigation?

Unlike remediation, mitigation is the process of dealing with risk or vulnerabilities after the fact. It usually involves setting controls around a supplier, so that your organization can defend against those vulnerabilities internally.

Let’s take a company that has calculated that the inherent risk minus control effectiveness for a supplier equals a residual risk of 3 out of 5, which is not satisfactory. Mitigation helps them reduce that risk further, through internal controls that help protect them against the risk.
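The arithmetic in that example (inherent risk minus control effectiveness, scored out of 5) can be sketched as follows. The clamping to the scale and the `acceptable` threshold value are assumptions for illustration, not a standard formula.

```python
def residual_risk(inherent: int, control_effectiveness: int, scale: int = 5) -> int:
    """Residual risk = inherent risk minus control effectiveness,
    clamped to the 0..scale range (a 5-point scale, per the example)."""
    return max(0, min(scale, inherent - control_effectiveness))

# The supplier in the example: inherent risk 5, controls offset 2 points,
# leaving a residual risk of 3 out of 5.
risk = residual_risk(inherent=5, control_effectiveness=2)
acceptable = 2  # assumed risk tolerance for the illustration
print(risk, "-> mitigate further" if risk > acceptable else "-> accept")
```

When the residual score exceeds tolerance, the mitigations described next (limiting shared data, restricting access) are what push it back down.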

For example, a company might decide that a supplier presents too large a residual risk, but it wants to start doing business with it. The company elects to mitigate the risk by limiting data shared with the vendor, so it shares only 5,000 consumer records instead of 10,000, until the vendor puts more effective privacy controls in place.

Let’s take the example from above of vendor employees coming on site. Once the vendor’s employee is required to wear a security badge and sign in at the front desk, your organization can decide on mitigation tactics, like giving them limited access privileges. That means that an employee of the organization may need to escort them into the building or department, and this vendor would have limited access to the organization’s files and information.

What Are the Different Mitigation and Remediation Techniques?

Mitigation is often used as a way for an organization to buy time before a software update or patch is developed. This is particularly true for consumer-facing applications that need to avoid downtime. One common mitigation technique is Distributed Denial of Service (DDoS) mitigation. This technique helps route suspicious traffic to a centralized location, where it’s filtered to prevent service disruption.

The remediation process is more specific, depending on the type, scope and depth of the threat. Penetration testing is a common remediation technique that enables you to spot gaps and attacks, and address them as they occur. It helps you identify potential attack vectors that malicious threat actors can use to gain control of your network or system. It also analyzes attack patterns to help uncover ongoing attacks, or detect an advanced persistent threat to your network.

Bridging Remediation and Mitigation for Effective Security

Both remediation and mitigation have their place in a comprehensive vulnerability management strategy. They complement each other to ensure that external and internal security controls are robust and responsive.

While remediation works by directly fixing security gaps and other risks at the source, so that they are completely eliminated, mitigation reduces the impact of any risks that you can’t totally fix or that might go unnoticed. When you use both tactics together, mitigation serves as a safety net for anything that can’t be remediated.

Balancing the two strategies can lead to a more resilient security posture that adapts to evolving threats and aligns with an organization’s risk tolerance. Let’s explore how to decide which approach your organization should use in a given situation.

Fixing vs Remediating Vulnerabilities

In the context of security and problem-solving, fixing typically refers to addressing a specific, immediate issue or vulnerability, often involving a direct solution like a patch or configuration change. Remediation, on the other hand, is a broader and more comprehensive process that aims to address the root cause of a problem to prevent future occurrences. Remediation may involve fixing, but also includes other actions like mitigation, workarounds, or even accepting residual risk.

Here’s a more detailed breakdown:

Fixing

  • Focus: Directly addressing a known vulnerability or issue.
  • Examples: Applying a security patch, fixing a bug in code, or correcting a configuration error.
  • Goal: To quickly eliminate the immediate problem and restore functionality or security.

Remediation

  • Focus: Addressing the underlying causes of a problem to prevent future occurrences.
  • Examples: Implementing new security policies, improving system architecture, or redesigning a process.
  • Goal: To achieve a more robust and resilient system by addressing systemic weaknesses.

Key Differences

  • Scope: Fixing is usually limited to the immediate problem, while remediation addresses the broader context and potential for future issues.
  • Depth: Fixing is a more shallow approach, while remediation delves deeper to identify and address the root cause.
  • Duration: Fixing is typically a shorter-term solution, while remediation can involve longer-term strategies and ongoing maintenance.

Example: Imagine a software application has a vulnerability that allows unauthorized access.

  • Fixing: Applying a security patch to the application would be considered fixing the immediate vulnerability.
  • Remediation: Implementing a new security policy that requires more robust authentication, or redesigning the application to eliminate the vulnerability altogether, would be considered remediation.

Four Stages of Endpoint Protection

Based on the provided information, the four stages of the endpoint protection process can be summarized as follows:

  1. Prevention: This initial stage focuses on proactively stopping threats before they can compromise endpoints. This involves a multi-layered approach, including:
    • Deploying robust endpoint protection solutions like antivirus software, firewalls, and intrusion prevention systems.
    • Regularly updating software and applying patches to address vulnerabilities.
    • Implementing encryption for data protection on endpoints.
    • Enforcing strong access control mechanisms and user authentication (e.g., multi-factor authentication).
    • Educating employees about cybersecurity best practices and avoiding risks like phishing attacks.
  2. Detection: Even with strong prevention, some threats may still penetrate defenses. The detection stage is about identifying these threats quickly to minimize potential damage. Key elements include:
    • Implementing Endpoint Detection and Response (EDR) solutions to continuously monitor endpoints for suspicious activities, anomalous behavior, and known indicators of compromise.
    • Utilizing advanced technologies like behavioral analysis and machine learning to identify anomalies that may indicate a security breach.
    • Integrating threat intelligence to stay informed about emerging threats and malicious indicators.
  3. Response: Once a threat is detected, the response stage focuses on taking immediate action to mitigate the impact. This involves:
    • Investigating and containing the incident, which might involve isolating the affected endpoint from the network.
    • Removing the malicious code or files.
    • Restoring affected systems to their original state.
    • Documenting and reviewing the incident to understand how it occurred and improve security measures for the future.
  4. Management and Reporting: This crucial stage involves ongoing management and monitoring of the endpoint protection system to ensure its continued effectiveness. This encompasses:
    • Centralized management of endpoint security solutions to monitor endpoint activity, configure policies, and investigate incidents from a single console.
    • Continuous monitoring of security solutions’ performance and conducting regular audits to identify potential weaknesses.
    • Reviewing and updating security protocols and policies based on audits and incident reports.
    • Automating updates for security software and threat definitions.
    • Generating detailed reports on security posture, incidents, and compliance for informed decision-making.

These stages form a continuous cycle, constantly adapting to new and evolving threats to ensure comprehensive protection for endpoints and the overall network security.

Prioritization Problem

Source: https://www.sevcosecurity.com/resource-articles/vulnerability-prioritization/

Over the past 20+ years, vulnerability management has evolved from a simple, primarily manual process to a complex multistage process. Responsibilities are spread across multiple teams and some automation has been implemented to improve vulnerability prioritization. But growing backlogs of critical vulnerabilities underscore how there’s still work to be done.

Effective vulnerability management requires comprehensive vulnerability and threat data. A complete inventory of devices, identities, software, and controls is also critical to prioritize vulnerabilities as risks to your environment. And while most vulnerability prioritization processes are focused on software vulnerabilities like CVEs, the most mature vulnerability management programs address vulnerabilities like missing or misconfigured agents, end-of-life systems, cloud misconfigurations, shadow IT, and more.

While we’re primarily focused on providing more effective ways for vulnerability prioritization, it’s important to review how vulnerability management programs have evolved. To address the backlog and threat of vulnerabilities in increasingly complex IT environments, organizations have had to reassess their vulnerability management programs.

The early problem: Vulnerability identification

Prior to 1998 and the introduction of Nessus, the free remote security scanner, there were a few tools used to identify vulnerabilities. However, most vulnerability assessments, even with the introduction of Nessus (and for almost a decade later), were local to the system.

The next big problem: Vulnerability classification

In 1999, the MITRE Corporation launched CVE (Common Vulnerabilities and Exposures) to identify and categorize vulnerabilities in software and firmware.

Identification and classification

With the introduction of scanning tools and CVE, it became possible to continuously assess for vulnerabilities—in a limited scope.

Proliferation of vulnerability assessment brought additional problems

As tech stacks and sources of vulnerabilities have grown, the “Which of these do I fix?” problem has become larger and more complex. The haystack has grown, and teams are overwhelmed.

But that’s not the ONLY problem. And that’s why organizations need a new approach to address the vulnerability prioritization problem altogether.

Vulnerability Management: An Immature Program

Vulnerability Management: A Maturing Program

Vulnerability Management: A Mature Program

Vulnerability Management by Team

Vulnerability Management: Vulnerability Prioritization

Methods for Prioritization

Historically, organizations have relied on several standardized frameworks to prioritize vulnerabilities, with varying degrees of success. The most common methods include the Common Vulnerability Scoring System (CVSS), the Exploit Prediction Scoring System (EPSS), and Known Exploited Vulnerabilities (KEV). Each of these methods serves a distinct purpose but also has inherent limitations that can hinder effective vulnerability prioritization as part of an overall vulnerability management program.

Common Vulnerability Scoring System (CVSS)

Definition: The Common Vulnerability Scoring System (CVSS) provides a way to capture the principal characteristics of a vulnerability and produce a numerical score reflecting its severity. A numerical score can then be translated into a qualitative representation (such as low, medium, high, and critical) to help organizations properly assess and prioritize vulnerabilities as part of their vulnerability management process.
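The numeric-to-qualitative translation mentioned above follows the severity bands published with CVSS v3.x (and retained in v4.0): None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0. A minimal sketch:

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x/v4.0 score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # Critical
print(cvss_rating(5.3))  # Medium
```

Teams often key their SLAs to these bands (e.g., remediate Critical within days, Low on a routine cycle), which is why a consistent mapping matters.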

The Forum of Incident Response and Security Teams (FIRST) was selected by the National Infrastructure Advisory Council (NIAC) to be the custodian of CVSS and has been maintaining it since 2005.

Uses:

  • CVSS scores help organizations prioritize vulnerabilities based on their potential impact.
  • Security teams often utilize CVSS as part of vulnerability management tools to classify and assess risk levels for vulnerabilities.

The Evolution of CVSS

The Common Vulnerability Scoring System was launched by the National Infrastructure Advisory Council (NIAC) in 2005.

CVSS v1 is not that different from what exists today. A few different metric groups have been added to it with some different metrics underneath, but the overall concept of a base formula, temporal formula, and environmental formula still remains.

CVSS v2 was a fast follower to v1. When CVSS v1 was released to criticism, v2 was released to expand the model. For example, v1 didn’t differentiate between impact to confidentiality, integrity, and availability of a system for a vulnerability where an attacker would get root versus user-level access. CVSS v2 is still being reported in some vulnerability management tools.

CVSSv1 (2005)

  • FIRST selected by NIAC to be custodian of CVSS in 2005.

CVSSv2 (2007)

  • Added granularity for access complexity and access vector.
  • Moved impact bias metric to environmental metric group.
  • Differentiated between root and user level access for CIA impact.
  • Changes to authentication metric.
  • v2 was still criticized for lack of granularity in metrics and inability to distinguish between vulnerability types and risk profiles.

CVSSv3.x (2015/2019)

  • Added new metrics to Base vector for User Interaction and Privileges Required.
  • CIA metrics updated.
  • Access Complexity and Attack Vector updates.
  • 3.1 added the CVSS Extensions Framework, allowing a scoring provider to include additional metrics.

All of these models always required you to understand the impact if a vulnerability were to be exploited. But you can only prioritize vulnerabilities if you know what the asset is—or if it even exists!

CVSS v4

CVSS v4, launched on November 1, 2023, included changes such as:

  • Reinforcing the concept that CVSS is not just the Base score
  • Finer granularity through the addition of new Base metrics and values
  • Enhanced disclosure of impact metrics
  • Temporal metric group renamed to Threat metric group
  • New Supplemental metric group to convey additional extrinsic attributes of a vulnerability that do not affect the final CVSS-BTE score
  • Additional focus on OT/ICS/Safety

In addition to the change in nomenclature, you can now report on the Base metric group alone (CVSS-B), Base + Threat (CVSS-BT), Base + Environmental (CVSS-BE), or Base + Threat + Environmental all together (CVSS-BTE). So the terminology you see associated with a CVSS score reflects which metric groups are included in it.

Limitations with CVSS

  • Subjectivity: While CVSS aims to provide an objective scoring system, the initial assessment of vulnerabilities often involves subjective interpretation of the impact and exploitability, leading to inconsistencies.
  • Focus on exploitability: CVSS primarily focuses on how a vulnerability can be exploited rather than the actual likelihood of an exploit occurring in a given environment.
  • Lack of context: CVSS does not consider environmental factors such as existing security controls, asset criticality, or business impact, which can render its scores less relevant in specific organizational contexts.

Exploit Prediction Scoring System (EPSS)

Definition: EPSS is a framework designed to estimate the likelihood that a vulnerability will be exploited in the wild. It is based on various risk factors, including the vulnerability’s characteristics and contextual data.

EPSS was launched in 2021, with the current version (v3) released in 2023. It is a daily estimate of the probability of exploitation activity being observed over the next 30 days.

According to FIRST, which also manages EPSS, the system is a “data-driven effort for estimating the likelihood (probability) that a software vulnerability will be exploited in the wild.” EPSS is not a replacement for CVSS, but complements CVSS. EPSS takes into consideration factors such as: the number of reference links associated with a CVE, the market share of the impacted software product, and industries and products that threat actors may be specifically targeting. EPSS does not measure the severity, but does measure exploitability.

Uses:

  • EPSS scores help organizations prioritize vulnerabilities by predicting the likelihood of exploitation in the next 30 days, allowing security teams to focus on the most pressing risks.
  • EPSS scores can be combined into a larger dataset (e.g., all vulnerabilities on a laptop).
  • The scores provide actionable insights that complement CVSS, enabling a more nuanced understanding of vulnerability risk.

Limitations:

  • Data dependency: EPSS relies heavily on the availability of real-world data about exploitation trends. If this data is lacking or stale, the predictive capability of EPSS is compromised.
  • Complexity: Implementing EPSS can be complex, requiring organizations to integrate it into their existing vulnerability management workflows. This may necessitate additional training and resources.
  • Dynamic nature: The threat landscape is constantly evolving, and EPSS scores change over time, making it harder to prioritize vulnerabilities. This necessitates regular updates and monitoring, which can be resource-intensive.

Known Exploited Vulnerabilities (KEV)

Definition: The Cybersecurity and Infrastructure Security Agency (CISA) maintains the KEV catalog, an authoritative source of vulnerabilities that have been exploited in the wild.

Uses:

  • The KEV catalog serves as a prioritized list of vulnerabilities that organizations should address immediately; these vulnerabilities have been confirmed to be actively exploited by threat actors.
  • Organizations can use the KEV catalog as one input to vulnerability prioritization, ensuring remediation efforts focus on the most critical vulnerabilities that pose an imminent threat.

Limitations:

  • Static nature: KEV may not include all vulnerabilities that pose a risk, particularly new or emerging vulnerabilities that have not yet been documented, though companies such as VulnCheck are providing next-generation exploit and vulnerability intelligence.
  • Limited scope: The focus on actively exploited vulnerabilities may lead organizations to overlook other exposures when prioritizing; vulnerabilities that are not currently exploited can still be critical in specific environments and risk profiles.
  • Dependency on reporting: Organizations depend on external reporting and updates to KEV, which can delay responses to emerging threats.
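The three systems are complementary: CVSS measures severity, EPSS estimates likelihood, and KEV confirms active exploitation. A minimal sketch of how they might feed a single triage policy; the thresholds and priority buckets below are illustrative assumptions, not part of any standard:

```python
from dataclasses import dataclass


@dataclass
class Vuln:
    cve_id: str
    cvss_base: float  # CVSS-B severity score, 0.0-10.0
    epss: float       # EPSS probability of exploitation in next 30 days, 0.0-1.0
    in_kev: bool      # listed in the CISA KEV catalog


def priority(v: Vuln) -> int:
    """Bucket vulnerabilities: lower number = fix sooner.

    Illustrative policy: anything in KEV is urgent regardless of score;
    otherwise combine severity (CVSS) with likelihood (EPSS).
    """
    if v.in_kev:
        return 0  # confirmed exploitation in the wild
    if v.cvss_base >= 7.0 and v.epss >= 0.1:
        return 1  # severe and likely to be exploited
    if v.cvss_base >= 7.0 or v.epss >= 0.1:
        return 2  # severe or likely, but not both
    return 3      # routine patching cycle


vulns = [
    Vuln("CVE-A", cvss_base=9.8, epss=0.02, in_kev=False),
    Vuln("CVE-B", cvss_base=6.5, epss=0.85, in_kev=True),
    Vuln("CVE-C", cvss_base=5.0, epss=0.01, in_kev=False),
]
for v in sorted(vulns, key=priority):
    print(v.cve_id, priority(v))  # CVE-B first: KEV membership trumps scores
```

The key design choice is that KEV membership short-circuits everything else, mirroring CISA's guidance that confirmed exploitation warrants immediate remediation.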

Standard Operating Procedures

Building AI Applications

The 7 Foundational Building Blocks of AI Agents

What I Learned Building Real AI Applications

Here's what I've observed after trying countless agent frameworks and talking to developers building real AI products: the frameworks aren't being used in production.

Most successful AI applications I've seen are built with custom building blocks, not frameworks. This is because most effective "AI agents" aren't actually that agentic at all. They're mostly deterministic software with strategic LLM calls placed exactly where they add value.

The problem is that most frameworks push the "give an LLM some tools and let it figure everything out" approach. But in reality, you don't want your LLM making every decision. You want it handling the one thing it's good at - reasoning with context - while your code handles everything else.

The solution is simpler than most frameworks make it seem. Here's the approach that actually works:

  • Break down what you're actually building into fundamental components
  • Solve each problem with proper software engineering best practices
  • ONLY INCLUDE AN LLM STEP when it's impossible to solve with deterministic code

Making an LLM API call is the most expensive and most dangerous operation in modern software development. While incredibly powerful, it should be avoided wherever deterministic code will do and used only when truly necessary.

This is especially true for background automation systems. There's a huge difference between building personal assistants (like ChatGPT, Cursor) where users are in the loop, versus building fully automated systems that process information or handle workflows without human intervention. Most of you aren't building the next ChatGPT - you're building background automations to make your work or company more efficient.

But when you do make that LLM call, it's all about context engineering. To get a good answer back, you need the right context at the right time sent to the right model. You need to pre-process all available information, prompts, and user input so the LLM can easily and reliably solve the problem. This is the most fundamental skill in working with LLMs.

AI agents are simply workflows - directed acyclic graphs (DAGs) if you're being precise, or just graphs if you include loops. Most steps in these workflows should be regular code - not LLM calls.

The 7 Building Blocks

Given all of this, you only need these seven core building blocks to solve almost any business problem. Take your big problem, break it down into smaller problems, then solve each one using these building blocks chained together - that's how you build effective AI agents.

These are the fundamental primitives. But how do you actually combine them? That's where workflow orchestration comes in - prompt chaining, routing, reflection, and other agentic patterns that emerge when you combine these building blocks strategically.

1. Intelligence

The only truly "AI" component

This is where the magic happens - and it's surprisingly simple. You send text to an LLM, it thinks about it, and sends text back. That's it. Without this, you just have regular software. The tricky part isn't the LLM call itself - it's everything else you need to build around it.

graph LR
    A[User Input] --> B[LLM Processing] --> C[Generated Response]

Intelligence: The "brain" that processes information and makes decisions using LLMs. This component handles context understanding, instruction following, and response generation.

More info at https://platform.openai.com/docs/guides/text?api-mode=responses:

from openai import OpenAI


def basic_intelligence(prompt: str) -> str:
    # One stateless call: text in, text out. Everything else is plain software.
    client = OpenAI()
    response = client.responses.create(model="gpt-4o", input=prompt)
    return response.output_text


if __name__ == "__main__":
    result = basic_intelligence(prompt="What is artificial intelligence?")
    print("Basic Intelligence Output:")
    print(result)

2. Memory

Context persistence across interactions

LLMs don't remember anything from previous messages. Without memory, each interaction starts from scratch because LLMs are stateless. So you need to manually pass in the conversation history each time. This is just storing and passing conversation state - something we've been doing in web apps forever.

graph LR
    A[Previous Context] --> C[LLM Processing]
    B[New Input] --> C
    C --> D[Response]
    C --> E[Updated Context] --> F[Context Storage]
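The diagram above reduces to a few lines of code: keep a list of messages and resend all of it on every call. A minimal sketch, with the LLM call itself omitted; the `ConversationMemory` class and `build_turn` helper are illustrative names, and with the OpenAI client the returned payload would be passed as the `messages` argument:

```python
class ConversationMemory:
    """Stores the running message history that must be resent on every call."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})


def build_turn(memory: ConversationMemory, user_text: str) -> list:
    """Record the new user message and return the full payload for one call.

    With the OpenAI client, this payload would go to e.g.
    client.chat.completions.create(model=..., messages=payload).
    """
    memory.add_user(user_text)
    return list(memory.messages)


mem = ConversationMemory("You are a helpful assistant.")
build_turn(mem, "What is CVSS?")
mem.add_assistant("CVSS is a vulnerability scoring system.")
payload = build_turn(mem, "And EPSS?")  # second turn carries the whole history
print(len(payload))  # 4 messages: system, user, assistant, user
```

Swapping the in-memory list for a database row or cache entry changes nothing about the LLM interaction; that is the sense in which this is ordinary web-app state management.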

3. Tools

External system integration capabilities

Most of the time you need your LLM to actually do stuff, not just chat. Pure text generation is limited - you want to call APIs, update databases, or read files. Tools let the LLM say "I need to call this function with these parameters" and your code handles the actual execution. This is just normal API integration where the LLM picks what to call and provides JSON input for the arguments.

graph LR
    A[User Input] --> B[LLM Analyzes Request] --> C{Tool Needed?}
    C -->|Yes| D[Select Tool] --> F[Execute Tool] --> G[Tool Result] --> H[LLM Formats Response]
    C -->|No| E[Direct Response]
    H --> I[Final Response]
    E --> I
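A minimal sketch of the dispatch step with the LLM stubbed out: the model's only job is to name a function and supply JSON arguments, and deterministic code does the execution. The `get_weather` tool and the hard-coded tool-call JSON are made-up examples:

```python
import json


# Tool registry: plain Python functions the "agent" is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API call


TOOLS = {"get_weather": get_weather}


def dispatch(llm_tool_call: str) -> str:
    """Execute a tool call the LLM described as JSON.

    The LLM only names the function and supplies arguments;
    deterministic code does the actual execution.
    """
    call = json.loads(llm_tool_call)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])


# Pretend the LLM responded with this tool-call JSON:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
print(result)  # Sunny in Oslo
```

The registry also acts as an allowlist: the LLM can only invoke functions you explicitly registered, never arbitrary code.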

4. Validation

Quality assurance and structured data enforcement

You need to make sure the LLM returns JSON that matches your expected schema. LLMs are probabilistic and can produce inconsistent outputs, so you validate the JSON output against a predefined structure. If validation fails, you can send it back to the LLM to fix it. This ensures downstream code can reliably work with the data. This is just normal schema validation with retry logic using tools like Pydantic, Zod, or data classes.

graph LR
    A[LLM JSON Output] --> B[Validate Schema] --> C{Valid?}
    C -->|Yes| D[Structured Data]
    C -->|No| E[Send Back to LLM] --> A
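The text above mentions Pydantic and Zod; the same validate-and-retry loop can be sketched with only the standard library. The `Ticket`-style schema and the stubbed LLM answers are illustrative:

```python
import json

# Expected schema for the LLM's structured output (illustrative fields).
REQUIRED = {"title": str, "priority": int}


def validate(raw: str) -> dict:
    """Parse LLM output and check it against the expected schema."""
    data = json.loads(raw)
    for field, typ in REQUIRED.items():
        if field not in data or not isinstance(data[field], typ):
            raise ValueError(f"bad or missing field: {field}")
    return data


def validate_with_retry(get_llm_json, max_attempts: int = 3) -> dict:
    """Re-ask the (stubbed) LLM until its JSON validates."""
    for attempt in range(max_attempts):
        try:
            return validate(get_llm_json(attempt))
        except ValueError:
            continue  # in practice: send the validation error back to the LLM
    raise RuntimeError("LLM never produced valid JSON")


# Stub LLM: the first answer is missing a field, the second is valid.
answers = ['{"title": "Login broken"}',
           '{"title": "Login broken", "priority": 1}']
ticket = validate_with_retry(lambda i: answers[i])
print(ticket["priority"])  # 1
```

A real implementation would use Pydantic's `model_validate_json` for richer error messages, but the control flow, validate and loop back on failure, is identical.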

5. Control

Deterministic decision-making and process flow

You don't want your LLM making every decision - some things should be handled by regular code. Use if/else statements, switch cases, and routing logic to direct flow based on conditions. This is just normal business logic and routing that you'd write in any application.

graph LR
    A[Input] --> B[Classify Intent] --> C{Intent Type}
    C -->|Question| D[Answer Handler] --> G[Response]
    C -->|Request| E[Request Handler] --> G
    C -->|Complaint| F[Complaint Handler] --> G
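A minimal sketch of deterministic routing, assuming a toy keyword classifier; in practice the classification step might itself be a cheap LLM call, but the routing table stays plain code:

```python
def classify_intent(text: str) -> str:
    """Toy keyword classifier; plain code, no LLM needed for this step."""
    lowered = text.lower()
    if "?" in text:
        return "question"
    if any(word in lowered for word in ("refund", "broken", "angry")):
        return "complaint"
    return "request"


# Routing table: deterministic business logic picks the handler.
HANDLERS = {
    "question": lambda t: f"Answering: {t}",
    "request": lambda t: f"Processing request: {t}",
    "complaint": lambda t: f"Escalating complaint: {t}",
}


def route(text: str) -> str:
    return HANDLERS[classify_intent(text)](text)


print(route("Where is my order?"))  # Answering: Where is my order?
```

Because the routing is a plain dictionary lookup, you can unit-test every branch without spending a single token.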

6. Recovery

Graceful failure management

Things will go wrong - APIs will be down, LLMs will return nonsense, rate limits will hit you. You need try/catch blocks, retry logic with backoff, and fallback responses when stuff breaks. This is just standard error handling that you'd implement in any production system.

graph LR
    A[Process Request] --> B{Success?}
    B -->|Yes| C[Return Result]
    B -->|No| D[Error Detected] --> E{Retry Possible?}
    E -->|Yes| F[Retry with Backoff] --> A
    E -->|No| G[Execute Fallback] --> H[Fallback Response]
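The retry-with-backoff-then-fallback loop, sketched in plain Python; the `flaky` stub stands in for a real LLM or API call, and the delays are shortened for the demo:

```python
import time


def with_recovery(operation, max_retries: int = 3,
                  base_delay: float = 1.0, fallback: str = "unavailable"):
    """Retry a flaky operation with exponential backoff, then fall back."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(delay)
                delay *= 2  # exponential backoff between attempts
    return fallback


# Stub call that fails twice before succeeding, like a rate-limited API.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return "ok"


print(with_recovery(flaky, base_delay=0.1))  # ok
```

Production code would also distinguish retryable errors (timeouts, 429s) from permanent ones (bad credentials), which should fail fast instead of retrying.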

7. Feedback

Human oversight and approval workflows

Sometimes you need a human to check the LLM's work before it goes live. Some decisions are too important or complex for full automation - like sending emails to customers or making purchases. Add approval steps where humans can review and approve/reject before execution. This is just basic approval workflows like you'd build for any app.

graph LR
    A[Generate Response] --> B[Human Review] --> C{Approved?}
    C -->|Yes| D[Execute/Send Response]
    C -->|No| E[Request Revision] --> F[Revise Response] --> B
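A minimal sketch of the approval gate, with the human reviewer abstracted as a callable and the revision step stubbed; in practice you would re-prompt the LLM with the reviewer's feedback:

```python
def human_review(draft: str, approve) -> str:
    """Gate a generated action behind human approval before it executes.

    `approve` is any callable returning True/False: a CLI prompt,
    a Slack button, or a ticket queue in a real system.
    """
    max_revisions = 2
    for _ in range(max_revisions + 1):
        if approve(draft):
            return draft              # approved: safe to send or execute
        draft = draft + " [revised]"  # stub; in practice, re-prompt the LLM
    raise RuntimeError("never approved; escalate to a human operator")


# Demo approver: rejects any draft that promises a refund.
decision = human_review("We will ship a replacement.",
                        approve=lambda d: "refund" not in d)
print(decision)  # We will ship a replacement.
```

Bounding the revision loop matters: without a cap, a reviewer who never approves would leave the workflow spinning instead of escalating.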

Mdbook Customization

This is a test.

Glossary

| Term | Definition |
|------|------------|
| Endpoint | Any physical device that connects to and communicates with a computer network. These devices can be traditional computers, laptops, and mobile phones, but also include servers, virtual machines, and Internet of Things (IoT) devices. Essentially, any hardware or virtual application that can send and receive data on a network is considered an endpoint. |
| Remediate | To fix or correct a problem, typically a security vulnerability, a system error, or a performance issue. It involves taking action to address the root cause of the problem and restore the system to a functional and secure state. |
| Mitigation | Actions taken to reduce the severity or impact of a potential threat or risk, or to lessen the negative consequences of an adverse event. It's about minimizing the harm caused by something, whether it's a security breach, a natural disaster affecting operations, or a system failure. Mitigation strategies can include implementing preventative measures, developing response plans, and continuously monitoring and reviewing the effectiveness of these actions. |

Remediate Breakdown

Here's a more detailed breakdown:

  • Correcting errors: Remediation can refer to fixing errors in code, configuration files, or other system components.
  • Addressing vulnerabilities: In cybersecurity, remediation focuses on patching vulnerabilities, updating software, and implementing security controls to eliminate potential threats.
  • Restoring functionality: It can also involve restoring a system to its intended operational state after a failure, outage, or performance degradation.
  • Improving performance: Remediation might also be used to optimize system performance by addressing bottlenecks, resource constraints, or other issues affecting speed and efficiency.

Essentially, remediation in IT is about actively resolving problems to ensure systems are reliable, secure, and performing as expected.

Mitigation Breakdown

Here's a more detailed breakdown:

  • Risk mitigation: In IT security, this often involves identifying vulnerabilities and taking steps to reduce the likelihood of those vulnerabilities being exploited. For example, patching software, implementing firewalls, or using strong passwords are all ways to mitigate risks.
  • Disaster mitigation: This focuses on minimizing the impact of disruptions like power outages or natural disasters. Examples include backup power systems, redundant network connections, and disaster recovery plans.
  • Impact mitigation: This deals with minimizing the consequences of an event that has already occurred. For example, having a system in place to quickly restore data after a ransomware attack would be considered impact mitigation.

Essentially, mitigation is a proactive and reactive approach to risk management in IT, aiming to reduce the negative effects of potential or actual problems.

Contributors