Signs Your Company Has Been Hacked
Cyber attacks are more common than ever. Many businesses don’t realise they’ve been compromised until serious damage occurs. This guide explains how to recognise the signs of a hack and what to do immediately to protect your organisation.
TL;DR (Share this with leadership)
- Early warning signs: odd login patterns, disabled security tools, outbound data spikes, strange email rules, new privileged accounts, finance changes, and staff reporting "weird stuff."
- Trust your logs: identity (SSO/MFA), endpoint/EDR, firewall/IDS, email security, and SaaS audit logs usually tell the story first.
- Act fast, not rash: contain affected accounts/systems, preserve evidence, and follow a documented incident response plan.
- Notify properly: customers, partners, and (if applicable) regulators.
- Harden immediately: patch, rotate secrets, enable/raise MFA, block legacy protocols, and tune monitoring so you don’t miss the next clue.
Why breaches are missed (and how to stop that)
Attackers succeed not only because of technical weaknesses but also operational gaps:
- Alert fatigue: Too many noisy alerts; the important ones get lost.
- Siloed visibility: The email team sees rules changing, finance sees odd invoices, and no one connects the dots.
- Assumed trust: "That’s probably IT" becomes the default explanation.
- No runbooks: People want to help but don’t know the first five steps, so minutes turn into hours.
Solution: establish a minimal, shared anomaly → action flow (see “The First 60 Minutes” below) and a clear channel (e.g., #security-incidents or a hotline) for staff to report issues.
Early Warning Signs, Explained (with where to check)
1. Unusual account and identity activity
What you’ll notice
- Impossible travel: logins from Melbourne and then Frankfurt within minutes.
- Off-hours access: midnight admin console activity from accounts that normally work 9–5.
- Multiple failed logins or lockouts, followed by a successful login.
- New privileged accounts or existing accounts added to admin roles without change tickets.
Where to look
- SSO/IdP logs (Azure AD/Microsoft Entra, Okta, Google Workspace): sign-ins, conditional access, MFA challenges.
- MFA provider logs: push fatigue (repeated prompts the user didn’t initiate).
- Directory changes: group membership, role assignments, and recently created service accounts.
Why it matters
Compromise often starts with credentials – phished, reused, or purchased. Identity is the new perimeter; if an attacker can authenticate, they’re halfway home.
Quick countermeasures
- Enforce MFA for everyone (prefer app-based or FIDO2 over SMS).
- Disable legacy protocols (IMAP/POP/Basic Auth).
- Alert on role changes and admin logins outside allow-listed countries/hours.
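The “impossible travel” check is easy to script once you export sign-in events. Below is a minimal Python sketch that flags consecutive sign-ins per user whose implied travel speed is physically implausible; the event fields and the 900 km/h threshold are illustrative assumptions, not any particular IdP’s export format.

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class SignIn:
    user: str
    when: datetime   # UTC
    lat: float       # approximate geolocation of the source IP
    lon: float
    ip: str

def km_between(a: SignIn, b: SignIn) -> float:
    """Great-circle (haversine) distance in km between two sign-in locations."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[SignIn], max_kmh: float = 900.0):
    """Flag consecutive sign-ins per user that imply travel faster than max_kmh
    (roughly airliner cruise speed)."""
    flagged = []
    by_user: dict[str, list[SignIn]] = {}
    for event in sorted(events, key=lambda e: (e.user, e.when)):
        by_user.setdefault(event.user, []).append(event)
    for user_events in by_user.values():
        for prev, curr in zip(user_events, user_events[1:]):
            hours = (curr.when - prev.when).total_seconds() / 3600
            if hours > 0 and km_between(prev, curr) / hours > max_kmh:
                flagged.append((prev, curr))
    return flagged
```

Review each flagged pair alongside the MFA and device details before acting; VPNs and mobile carriers can produce benign hits.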
2. Unexpected system behaviour on endpoints and servers
What you’ll notice
- Sudden slowness or CPU spikes that don’t align with patch cycles.
- Weird pop-ups, certificates, or driver prompts.
- Frequent reboots, services stopping/starting, or shadow copies deleted.
- Unapproved tools appearing (network scanners, RAR/7-Zip staging folders, PsExec, Cobalt Strike beacons).
Where to look
- EDR/XDR (Defender for Endpoint, CrowdStrike, SentinelOne): suspicious processes, lateral movement, credential dumping.
- Windows event logs: service installs, scheduled tasks, RDP logons.
- Linux auditd/journalctl: new cron jobs, SSH keys added, sudoers changes.
Why it matters
Attackers modify endpoints to persist, move laterally, and stage data for exfiltration. These behaviours often precede ransomware.
Quick countermeasures
- Quarantine the device in EDR and capture memory/disk before you wipe.
- Block known bad hashes/domains across your stack.
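The Linux items above (new cron jobs, SSH keys, sudoers changes) can be swept quickly with a script that reports recently modified persistence locations. This is a minimal sketch: the paths and the 24-hour window are assumptions to adapt, and it only reports, it doesn’t remediate.

```python
import time
from pathlib import Path

# Common Linux persistence locations; extend for your environment.
WATCH_PATHS = [
    "/etc/crontab",
    "/etc/cron.d",
    "/var/spool/cron",
    "/etc/sudoers",
    "/etc/sudoers.d",
    "/etc/systemd/system",
    "/root/.ssh/authorized_keys",
]

def recently_modified(paths, window_hours=24):
    """Yield (path, modified_time) for watched files changed within the window."""
    cutoff = time.time() - window_hours * 3600
    for base in paths:
        root = Path(base)
        if not root.exists():
            continue
        files = [root] if root.is_file() else [f for f in root.rglob("*") if f.is_file()]
        for f in files:
            try:
                mtime = f.stat().st_mtime
            except OSError:
                continue  # unreadable file; note it and move on
            if mtime >= cutoff:
                yield str(f), time.ctime(mtime)

if __name__ == "__main__":
    # Include every user's authorized_keys, not just root's.
    paths = WATCH_PATHS + [str(p) for p in Path("/home").glob("*/.ssh/authorized_keys")]
    for path, when in recently_modified(paths):
        print(f"changed {when}: {path}")
```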
3. Abnormal network and data exfiltration patterns
What you’ll notice
- Outbound traffic spikes to unfamiliar IPs or cloud storage providers.
- Uncommon protocols/ports (SMB over WAN, DNS tunnelling).
- Connections to known-bad hosts or newly registered domains.
Where to look
- Firewall/IDS/IPS/NetFlow: egress volumes by host, new destinations.
- Data Loss Prevention (DLP): large transfers of sensitive files.
- Proxy logs: odd user agents, encrypted traffic to unexpected destinations.
Why it matters
Quiet data theft is often the primary objective. Outbound anomalies are a top indicator you’re already in phase two of an attack.
Quick countermeasures
- Geo-blocking for admin consoles and VPN.
- Egress allow-listing for servers.
- Alert on unusual after-hours egress per host/user baseline.
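A simple way to operationalise the per-host egress baseline is to compare each host’s outbound bytes for the most recent day against its own earlier history. The sketch below assumes a flat CSV export from your firewall or NetFlow collector with date, host and bytes_out columns; both the column names and the 3x multiplier are illustrative.

```python
import csv
from collections import defaultdict
from statistics import mean

def egress_outliers(csv_path: str, latest_date: str, multiplier: float = 3.0):
    """Flag hosts whose outbound bytes on latest_date exceed multiplier x their
    average daily egress on earlier days in the export (columns: date,host,bytes_out)."""
    history = defaultdict(list)   # host -> bytes_out totals on earlier days
    latest = defaultdict(int)     # host -> bytes_out on latest_date
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host, day, sent = row["host"], row["date"], int(row["bytes_out"])
            if day == latest_date:
                latest[host] += sent
            else:
                history[host].append(sent)
    outliers = []
    for host, sent in latest.items():
        baseline = mean(history[host]) if history[host] else 0.0
        if sent > multiplier * baseline:  # hosts with no history are flagged for any egress
            outliers.append((host, sent, baseline))
    return outliers
```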
4. Unauthorised access or privilege escalation
What you’ll notice
- Users suddenly accessing files/systems they never touch.
- Access tokens minted for new OAuth apps with broad scopes.
- Service accounts acquiring permissions not required for their role.
Where to look
- File server/share logs (SMB audit).
- Cloud IAM (AWS/GCP/Azure): policy changes, cross-account roles.
- SaaS audit logs (M365, Google Workspace, Salesforce): app consents, mailbox access.
Why it matters
Modern attacks target OAuth and service principals; once consented, malicious apps persist beyond password/MFA resets.
Quick countermeasures
- Restrict user-granted app consent; require admin approval.
- Review service principal secrets/certificates and rotate regularly.
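Risky consents can also be swept from an exported audit trail rather than waiting for a periodic review. The record layout and the high-risk scope list below are assumptions (export formats and scope naming differ between M365, Google Workspace and other platforms), so treat this as a sketch to adapt.

```python
import json

# Scopes that generally deserve admin review; adjust for your platform's naming.
HIGH_RISK_SCOPES = {
    "Mail.ReadWrite", "Mail.Send", "Files.ReadWrite.All",
    "Directory.ReadWrite.All", "offline_access", "full_access_as_app",
}

def risky_consents(audit_export_path: str):
    """Scan a JSON-lines audit export for app-consent events carrying broad scopes.
    Assumes each line looks like:
    {"event": "consent_granted", "app": "...", "user": "...", "scopes": [...]}"""
    findings = []
    with open(audit_export_path) as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("event") != "consent_granted":
                continue
            risky = set(record.get("scopes", [])) & HIGH_RISK_SCOPES
            if risky:
                findings.append({
                    "app": record.get("app"),
                    "user": record.get("user"),
                    "risky_scopes": sorted(risky),
                })
    return findings
```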
5. Suspicious email behaviour and BEC patterns
What you’ll notice
- Forwarding rules sending copies to external mailboxes.
- Transport rules that hide or auto-delete certain messages.
- Staff or customers report fake invoice changes or CEO wire requests.
Where to look
- Mailbox audit logs (rules, forwarding, delegate access).
- Secure email gateway: spikes in outbound volumes, unusual destinations.
- DMARC/DKIM/SPF reports: authentication failures that suggest your domain is being spoofed.
Why it matters
Business Email Compromise (BEC) is rampant and costly. It often doesn’t trigger anti-malware alerts because there is no malware – just social engineering.
Quick countermeasures
- Block auto-forward to external by policy.
- Add external sender banners to inbound mail.
- Require out-of-band verification for bank detail changes.
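Auto-forward rules are straightforward to audit in bulk once you have a tenant-wide export of mailbox rules. The sketch below assumes a hypothetical CSV export with mailbox, rule_name and forward_to columns and flags any rule that forwards outside your own domains; swap in whatever your mail platform actually produces.

```python
import csv

INTERNAL_DOMAINS = {"example.com.au", "example.com"}  # replace with your own domains

def external_forward_rules(rules_csv: str):
    """Flag mailbox rules that forward or redirect mail to an external domain.
    Assumes columns: mailbox, rule_name, forward_to (semicolon-separated addresses)."""
    flagged = []
    with open(rules_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            targets = [t.strip() for t in row.get("forward_to", "").split(";") if t.strip()]
            external = [
                t for t in targets
                if "@" in t and t.rsplit("@", 1)[1].lower() not in INTERNAL_DOMAINS
            ]
            if external:
                flagged.append((row["mailbox"], row["rule_name"], external))
    return flagged

for mailbox, rule, targets in external_forward_rules("mailbox_rules.csv"):
    print(f"{mailbox}: rule '{rule}' forwards externally to {', '.join(targets)}")
```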
6. Security controls disabled or tampered with
What you’ll notice
- AV/EDR switched off, tamper protection disabled.
- Logs missing or rotated right before incidents.
- SIEM shows gaps in expected event sources.
Where to look
- EDR/AV admin console: health, sensor status, tamper alerts.
- SIEM: data connector health, ingestion errors.
- Windows/Linux: service status and startup configs.
Why it matters
Attackers try to blind you before staging payloads. A “quiet” environment can be the loudest indicator.
Quick countermeasures
- Enable tamper protection everywhere.
- Treat gaps in logging as a high-severity incident.
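Treating logging gaps as high-severity is easier to enforce when source freshness is checked on a schedule. The sketch below compares the last event seen from each expected source against a maximum tolerated silence; the source names and thresholds are placeholders for whatever your SIEM actually ingests.

```python
from datetime import datetime, timedelta, timezone

# Expected sources and the longest silence you will tolerate from each.
EXPECTED_SOURCES = {
    "edr": timedelta(minutes=30),
    "sso_signin": timedelta(minutes=15),
    "firewall": timedelta(minutes=10),
    "email_gateway": timedelta(hours=1),
}

def stale_sources(last_seen, now=None):
    """Return sources that are missing entirely or silent past their threshold.
    last_seen maps source name -> datetime (UTC) of its most recent event."""
    now = now or datetime.now(timezone.utc)
    problems = []
    for source, max_silence in EXPECTED_SOURCES.items():
        seen = last_seen.get(source)
        if seen is None:
            problems.append((source, "never seen"))
        elif now - seen > max_silence:
            problems.append((source, f"silent for {now - seen}"))
    return problems
```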
7. Finance irregularities and vendor record changes
What you’ll notice
- Updated supplier bank details via email, often “urgent.”
- Unusual wire transfers just under approval thresholds.
- Invoice template changes or new payment portals.
Where to look
- ERP/AP audit logs: vendor profile edits, approval exceptions.
- Email headers: lookalike domains (examp1e.com).
- Ticketing/CRM: patterns of “payment update” requests.
Why it matters
Attackers monetise access quickly. Finance changes are a clean, high-signal sign that email or identities are already compromised.
Quick countermeasures
- Dual control on vendor detail changes.
- Mandatory call-back verification using known numbers (never a number taken from the email).
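Lookalike domains such as examp1e.com can also be caught programmatically before a payment run. The sketch below normalises common digit-for-letter swaps and measures similarity against your vendor master list; the substitution map, sample domains and 0.85 threshold are illustrative values to tune.

```python
from difflib import SequenceMatcher

KNOWN_VENDOR_DOMAINS = {"example.com", "acmesupplies.com.au"}  # your vendor master list

# Map common digit-for-letter swaps used in lookalike domains back to letters.
SUBSTITUTIONS = str.maketrans("1035", "loes")

def normalise(domain: str) -> str:
    return domain.lower().translate(SUBSTITUTIONS)

def lookalike_check(sender_domain: str, threshold: float = 0.85):
    """Classify a sender domain as exact, lookalike, or unknown against known vendors."""
    candidate = normalise(sender_domain)
    best_ratio, best_match = 0.0, None
    for known in KNOWN_VENDOR_DOMAINS:
        if candidate == normalise(known):
            # Identical after normalisation: either the real domain or a close fake.
            return ("exact", known) if sender_domain.lower() == known else ("lookalike", known)
        ratio = SequenceMatcher(None, candidate, normalise(known)).ratio()
        if ratio > best_ratio:
            best_ratio, best_match = ratio, known
    return ("lookalike", best_match) if best_ratio >= threshold else ("unknown", best_match)

print(lookalike_check("examp1e.com"))  # ('lookalike', 'example.com')
```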
8. Staff reporting “weird stuff”
What you’ll hear
- “My password stopped working.”
- “Authenticator prompts popped up while I wasn’t logging in.”
- “Files disappeared or were renamed.”
- “My device fan is screaming when I’m idle.”
Where to look
- Helpdesk tickets, Teams/Slack channels, line manager emails.
- EDR telemetry around reported timestamps.
Why it matters
Humans notice anomalies before systems do. Culture determines whether those anomalies are reported.
Quick countermeasures
- Promote a no-blame reporting culture.
- A simple “Report Suspicious” button in email clients accelerates triage.
9. Third-party notifications
What you’ll receive
- Partners: “We’re receiving phishing from your domain.”
- Law enforcement: “We found your data for sale.”
- Providers: “Suspicious activity on your tenant.”
Where to look
- Your abuse@ mailbox and security distribution lists.
- Dark web monitoring alerts from your provider.
Why it matters
External eyes often see what you can’t. Take these seriously; they’re rarely sent lightly.
Quick countermeasures
- Acknowledge, open an incident, and request artefacts (headers, timestamps, IoCs).
Cloud & SaaS red flags (don’t overlook these)
- Microsoft 365 / Google Workspace: OAuth app consents with high scopes, mass mailbox rule creation, “impossible travel,” legacy auth use.
- AWS/GCP/Azure: new access keys, unusual regions, public S3 buckets created, Security Group openings to 0.0.0.0/0, disabled CloudTrail/Activity Logs.
- Collab tools (SharePoint/Drive/Slack): external sharing spikes, public links for sensitive folders.
- Backup platforms: unexpected deletion of restore points. (Attackers target backups first.)
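For the AWS items, a couple of read-only API calls cover the highest-signal checks: is CloudTrail still logging, and are any security groups open to the whole internet. This is a minimal boto3 sketch assuming read-only credentials are already configured; extend it (or lean on your cloud security posture tool) for the S3, IAM and region checks.

```python
import boto3

def cloudtrail_logging_status():
    """Report any CloudTrail trail that is not currently logging."""
    ct = boto3.client("cloudtrail")
    stopped = []
    for trail in ct.describe_trails().get("trailList", []):
        status = ct.get_trail_status(Name=trail["TrailARN"])
        if not status.get("IsLogging"):
            stopped.append(trail.get("Name", trail["TrailARN"]))
    return stopped

def world_open_security_groups():
    """Report security groups with any ingress rule open to 0.0.0.0/0."""
    ec2 = boto3.client("ec2")
    open_groups = []
    for sg in ec2.describe_security_groups().get("SecurityGroups", []):
        for perm in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                open_groups.append((sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")))
    return open_groups

if __name__ == "__main__":
    print("Trails not logging:", cloudtrail_logging_status())
    print("Security groups open to the internet:", world_open_security_groups())
```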
Ransomware precursors to watch for
- Credential dumping (LSASS access, Mimikatz-like behaviour).
- Mass discovery (PowerShell/WMIC inventory sweeps).
- Lateral movement (RDP/SMB auth storms, PsExec usage).
- Disabling recovery (vssadmin delete, shadow copy removal).
- Encryption test runs on small folders.
If you see two or more of these in short succession, treat as critical.
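That escalation rule can be encoded directly so analysts (or automation) apply it consistently. The sketch below groups precursor detections per host and flags critical when two or more distinct categories fire within a window; the 30-minute window is an assumption to tune.

```python
from datetime import datetime, timedelta

# Each detection: (timestamp, host, category), where category is one of
# "credential_dumping", "mass_discovery", "lateral_movement",
# "recovery_tampering", "test_encryption".

def critical_hosts(detections, window=timedelta(minutes=30)):
    """Return hosts where two or more distinct precursor categories fire within the window."""
    by_host = {}
    for ts, host, category in detections:
        by_host.setdefault(host, []).append((ts, category))
    critical = set()
    for host, events in by_host.items():
        events.sort()
        for ts, _ in events:
            categories = {c for t, c in events if ts <= t <= ts + window}
            if len(categories) >= 2:
                critical.add(host)
                break
    return critical

sample = [
    (datetime(2024, 5, 1, 2, 10), "SRV-FILE01", "credential_dumping"),
    (datetime(2024, 5, 1, 2, 25), "SRV-FILE01", "recovery_tampering"),
]
print(critical_hosts(sample))  # {'SRV-FILE01'}
```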
Quick-reference table (Indicator → Where to check → Immediate action)
| Indicator | Where to check | First action |
| --- | --- | --- |
| New admin account | IdP/Directory audit | Disable/suspend; review recent role grants |
| Off-hours successful login | SSO sign-in logs | Force password reset; require re-MFA |
| External auto-forward rule | Mailbox/tenant rules | Remove; block external forwarding org-wide |
| Outbound traffic spike | Firewall/NetFlow | Block destination; isolate host |
| EDR disabled | EDR console | Re-enable with tamper protection; investigate stop event |
| Vendor bank details changed | ERP/AP audit | Freeze payment; call vendor via known number |
| Backup snapshots deleted | Backup console | Lock admin account; verify immutable/offline copies |
What to do when you suspect a breach
Golden rule: contain fast, preserve evidence, and communicate clearly.
The First 60 Minutes
1. Open an incident ticket and timestamp everything.
2. Isolate affected accounts/devices.
- Disable or require re-MFA for accounts.
- Quarantine endpoints in EDR (but avoid powering off servers unless required).
3. Preserve evidence before remediation:
- Capture memory/disk images for key hosts.
- Export relevant logs (SSO, EDR, firewall, email, SaaS).
4. Block active egress to suspicious destinations.
5. Establish a comms channel (war-room chat + bridge) and appoint a coordinator.
6. Assess the immediate business impact (payments, email, critical services) and implement temporary controls (e.g., payment freeze).
The First 24 Hours
- Scope the incident: affected users, systems, data at risk.
- Rotate secrets: admin passwords, API keys, service account credentials.
- Patch and close the entry point (phished account, vulnerable VPN, outdated plugin).
- Notify stakeholders as required (leadership, legal, customers, partners).
- If regulated, prepare for breach notification obligations applicable to your jurisdiction and industry.
The First Week
- Eradicate persistence: remove backdoors, rogue OAuth apps, startup tasks, scheduled jobs.
- Threat hunt adjacent systems for indicators.
- Validate backups and perform clean restores where needed.
- Tune monitoring to alert earlier on the behaviours you just experienced.
- Lessons learned: document timeline, root cause, and control improvements; brief leadership.
Practical hardening after any incident
- Identity: enforce MFA tenant-wide, block legacy auth, conditional access by risk/location, periodic access reviews.
- Endpoints: EDR on 100% of devices, tamper protection, application control (allow-listing), local admin lockdown.
- Email: DMARC “reject”, disable external auto-forward, external sender tags, impersonation protection.
- Network: least-privilege segmentation, egress filtering, VPN with MFA, disable unused ports.
- Backups: 3-2-1 rule, immutable/offline copies, test restores monthly.
- SaaS/Cloud: centralised logging, admin approval for OAuth, geo/IP restriction for admin portals.
- Human layer: ongoing phishing simulations, incident drills, “report suspicious” one-click buttons.
- Governance: adopt a control framework (e.g., NIST CSF) and measure progress quarterly.
Sample playbook snippet (drop into your runbook)
When a staff member reports an unfamiliar MFA prompt:
1. Suspend the account or require re-registration of MFA.
2. Review sign-in logs for the last 7 days; note IP/ISP/device.
3. Reset password and all sessions; revoke refresh tokens.
4. Check mailbox rules and OAuth consents; remove anything unrecognised.
5. Search SIEM for the same IP across other sign-ins; pivot to endpoints.
6. Document and brief the incident coordinator.
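Steps 2 and 5 go faster if the sign-in export is summarised before anyone eyeballs it. The sketch below groups a 7-day CSV export by IP and device and reports counts, distinct users and first/last seen; the column names are placeholders for your IdP’s actual export, and timestamps are assumed to be ISO-8601 so string comparison preserves order.

```python
import csv
from collections import defaultdict

def summarise_signins(csv_path: str):
    """Summarise sign-ins by (ip, device): count, distinct users, first/last seen.
    Assumes columns: timestamp, user, ip, device, result; timestamps in ISO-8601."""
    summary = defaultdict(lambda: {"count": 0, "first": None, "last": None, "users": set()})
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            entry = summary[(row["ip"], row["device"])]
            entry["count"] += 1
            entry["users"].add(row["user"])
            ts = row["timestamp"]
            entry["first"] = ts if entry["first"] is None else min(entry["first"], ts)
            entry["last"] = ts if entry["last"] is None else max(entry["last"], ts)
    return summary

for (ip, device), info in summarise_signins("signins_last7d.csv").items():
    print(f"{ip} / {device}: {info['count']} sign-ins, {len(info['users'])} users, "
          f"first {info['first']}, last {info['last']}")
```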
Call to Action
If any of the signals in this guide feel familiar, or you’d like a readiness assessment and a 24/7 monitoring plan, Computing Australia can help. From rapid incident response to ongoing MDR and NIST-aligned hardening, we’ll get you from uncertainty to control.
FAQ
How do I tell a false alarm from a real threat?
Look for correlated anomalies: off-hours admin login + new forwarding rule + outbound data spike is high-signal. One odd log can be benign; three related ones are not.
Should I turn off a compromised server?
Prefer isolation over powering off. Quarantine the host to preserve volatile memory (critical evidence) unless active destruction is underway.
Will resetting passwords fix everything?
Not if the attacker established persistence (OAuth app, backdoor service, Golden SAML). You must hunt and remove those footholds.
What about small businesses without a SOC?
Use built-in tools (M365/Google audit, EDR, firewall logs) and establish simple, repeatable steps. Consider a managed detection & response (MDR) partner.
When do we notify customers/regulators?
As soon as you confirm a breach affecting personal or regulated data, follow your legal obligations and contractual commitments. Consult legal counsel.