• 'Weaponized AI' could be the biggest security threat

    From Mike Powell@1:2320/105 to All on Sun Jan 25 10:16:59 2026
    'Weaponized AI' could be the biggest security threat facing your business
    this year - here's what experts say you should be on the lookout for

    Date:
    Sat, 24 Jan 2026 17:05:00 +0000

    Description:
    AI-driven cybercrime is escalating rapidly, combining phishing, deepfakes,
    and Dark LLMs, forcing businesses to strengthen defences and monitoring systems.

    FULL STORY

    Artificial intelligence is now used by cybercriminals to automate fraud,
    scale phishing campaigns, and industrialize impersonation at a level that was previously impractical.

    Unfortunately, AI-assisted attacks could be among the biggest security
    threats your business faces this year, but staying aware and acting promptly can keep you a step ahead.

    Group-IB's Weaponized AI report shows the growing use of AI by criminals represents a distinct fifth wave of cybercrime, driven by the commercial availability of AI tools rather than isolated experimentation.

    Rise in AI-driven cybercrime activity

    Evidence from dark web monitoring shows that AI-related cybercrime activity
    is not a short-term response to new technologies.

    Group-IB says first-time dark web posts referencing AI-related keywords increased by 371% between 2019 and 2025. The most pronounced acceleration followed the public release of ChatGPT in late 2022, after which interest levels remained persistently high.

    By 2025, tens of thousands of forum discussions each year referenced AI
    misuse, indicating a stable underground market rather than experimental curiosity.

    Group-IB analysts identified at least 251 posts explicitly focused on large language model exploitation, with most references linked to OpenAI-based systems.

    A structured AI crimeware economy has emerged, with at least three vendors offering self-hosted Dark LLMs without safety restrictions. Subscription
    prices range from $30 to $200 per month, with some vendors claiming more than 1,000 users.

    One of the fastest-growing segments is impersonation services, with mentions
    of deepfake tools linked to identity verification bypass rising by 233% year
    on year. Entry-level synthetic identity kits are sold for as little as $5, while real-time deepfake platforms cost between $1,000 and $10,000.

    Group-IB recorded 8,065 deepfake-enabled fraud attempts at a single
    institution between January and August 2025, with verified global losses reaching $347 million.

    AI-assisted malware and API abuse have grown sharply, with AI-generated phishing now embedded in malware-as-a-service platforms and remote access tools.

    Experts warn that AI-powered attacks can bypass traditional defenses unless teams continuously monitor and update systems. Networks need firewalls that can identify unusual traffic and AI-generated phishing attempts.

    With appropriate endpoint protection, companies can detect suspicious
    activity before malware or remote access tools spread.

    Rapid and adaptive malware removal remains critical because AI-enabled
    attacks can execute and propagate faster than standard methods can respond. Combined with a layered security approach and anomaly detection, these
    measures help stop intrusions such as deepfake calls, cloned voices, and fake login attempts.
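    The anomaly detection mentioned above can be as simple as flagging traffic
    volumes that deviate sharply from a baseline. The sketch below is a minimal,
    illustrative z-score detector; the per-minute login counts and the threshold
    are hypothetical values chosen for the example, not figures from the report:

    ```python
    from statistics import mean, stdev

    def flag_anomalies(counts, threshold=2.5):
        """Return indices of counts more than `threshold` standard
        deviations above the mean -- a minimal z-score anomaly check."""
        mu = mean(counts)
        sigma = stdev(counts)
        if sigma == 0:
            return []
        return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

    # Hypothetical per-minute login-attempt counts; the spike at index 10
    # models a burst of automated credential-stuffing traffic.
    traffic = [12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 140]
    print(flag_anomalies(traffic))  # -> [10]
    ```

    Real deployments would use rolling baselines and per-source statistics
    rather than a single global mean, but the principle is the same: establish
    what normal looks like, then alert on sharp deviations.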

    ======================================================================
    Link to news story: https://www.techradar.com/pro/weaponized-ai-could-be-the-biggest-security-threat-facing-your-business-this-year-heres-what-experts-say-you-should-be-on-the-lookout-for

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)