34 Commits
v1.0 ... v4.0

Author SHA1 Message Date
Sergey
938031f1de Use RawHTTP library to process HTTP streams (packmate/Packmate!23) 2023-07-31 15:42:17 +00:00
Sergey
7986658bd1 Update configuration 2023-07-26 18:21:49 +00:00
Sergey
4fed53244d Merge branch 'update-frontend' into 'master'
Update frontend to show packet offsets

See merge request packmate/Packmate!21
2023-07-24 22:30:47 +00:00
Sergey Shkurov
37fd548364 Update frontend to show packet offsets 2023-07-25 02:28:56 +04:00
Sergey Shkurov
fcd7918125 Update frontend to use dark theme 2023-05-01 23:23:11 +02:00
Sergey Shkurov
c88ca8abbd Update frontend 2023-05-01 21:18:26 +02:00
Sergey
15206188a2 Merge branch 'display-stream-size' into 'master'
Display stream size

See merge request packmate/Packmate!20
2023-04-30 22:22:01 +00:00
Sergey
4346445af9 Display stream size 2023-04-30 22:22:01 +00:00
Sergey
f1d67f696d Merge branch 'pattern-updates' into 'master'
Pattern updates

Closes #32

See merge request packmate/Packmate!19
2023-04-30 00:08:15 +00:00
Sergey Shkurov
4b45f7dee7 Update frontend 2023-04-30 01:50:05 +02:00
Sergey Shkurov
a8ee7363d4 Revert adding field 2023-04-29 04:51:57 +02:00
Sergey Shkurov
25d0921aed Update frontend 2023-04-29 04:40:46 +02:00
Sergey Shkurov
73fa5b1373 Add support for pattern updating 2023-04-28 04:08:16 +02:00
Sergey Shkurov
40136ad9d9 Update ServiceController endpoints 2023-04-28 03:59:01 +02:00
Sergey Shkurov
0b50f202fc Move dto transformation into services 2023-04-28 03:27:28 +02:00
Sergey Shkurov
288d24fffc Send pattern ids instead of patterns in streams 2023-04-28 02:02:28 +02:00
Sergey
40b42934b6 Merge branch 'pattern-removal' into 'master'
Implement pattern removal

Closes #29

See merge request packmate/Packmate!18
2023-04-27 23:19:16 +00:00
Sergey
4cd5e72fee Implement pattern removal 2023-04-27 23:19:16 +00:00
Sergey
145f3e63c8 Merge branch 'update-versions' into 'master'
Update versions

See merge request packmate/Packmate!17
2023-04-27 21:22:40 +00:00
Sergey Shkurov
6ea53719fd Remove DISTINCT 2023-04-27 23:19:19 +02:00
Sergey Shkurov
8bbd135e96 Refactor code 2023-04-27 22:35:03 +02:00
Sergey Shkurov
79315c3c18 Update jna dependency for MacOS 2023-04-27 22:35:02 +02:00
Sergey Shkurov
67c5462018 Fix a possible bug 2023-04-27 22:35:02 +02:00
Sergey Shkurov
4e2473a3cc Update libraries 2023-04-27 22:35:02 +02:00
Sergey Shkurov
ea45f1b9e5 Use gradle.kts 2023-04-27 22:35:02 +02:00
Sergey Shkurov
93ec39b561 Prepare to move to gradle.kts 2023-04-27 22:35:02 +02:00
Sergey Shkurov
7878ecebfc Fix hashtag symbols becoming links 2023-04-27 22:35:02 +02:00
Sergey Shkurov
7afb9dc5fb Update Spring Boot 2 2023-04-27 22:35:02 +02:00
Sergey Shkurov
8d33c6a6e1 Update gradle version 2023-04-27 22:35:02 +02:00
Sergey
1b6e619475 Merge branch 'failure-analyzer' into 'master'
Add failure analyzers

Closes #30

See merge request packmate/Packmate!16
2023-04-25 16:19:49 +00:00
Sergey Shkurov
0d756ec39c Add failure analyzer for incorrect interface name 2023-04-25 11:28:28 +02:00
Sergey Shkurov
eef33308a5 Add failure analyzer for incorrect pcap file 2023-04-24 02:20:21 +03:00
Sergey
5be73b4b61 Merge branch 'update-docs' into 'master'
Update docs

See merge request packmate/Packmate!15
2023-04-14 00:58:44 +00:00
Sergey
872e27b926 Update docs 2023-04-14 00:58:44 +00:00
72 changed files with 1221 additions and 904 deletions


@@ -25,7 +25,14 @@
* Decrypts TLS over RSA when the private key is available
![Main window screenshot](screenshots/Screenshot.png)
## Cloning
## Quick Start
For a quick start, use [this starter](https://gitlab.com/packmate/starter/-/blob/master/README.md).
## Full Build
Below are instructions for those who want to build Packmate themselves.
### Cloning
Since this repository contains the frontend as a git submodule, it has to be cloned like this:
```bash
git clone --recurse-submodules https://gitlab.com/packmate/Packmate.git
@@ -40,102 +47,20 @@ git pull # Pull the latest version of the master repo and
git submodule update --init --recursive
```
## Preparation
This software uses Docker and docker-compose. The host's network interface is passed into the
`packmate-app` image; its name is set via an environment variable (see below).
`packmate-db` is configured to listen on port 65001 on the local IP.
Database files are saved in ./data, so to reset the database, delete that directory.
### Settings
The program reads its main settings from environment variables, so for convenience
you can create an env file.
It must be named `.env` and located in the project's root directory.
Specify the following in the file:
```dotenv
# Local IP of the server on the given interface or in the pcap file
PACKMATE_LOCAL_IP=192.168.1.124
# Username for the web interface
PACKMATE_WEB_LOGIN=SomeUser
# Password for the web interface
PACKMATE_WEB_PASSWORD=SomeSecurePassword
```
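For example, the whole file can be created in one go. This is only a sketch; the values below are placeholders from the examples above and must be replaced with your own:

```shell
# Sketch: create a minimal .env in the project root (placeholder values)
cat > .env <<'EOF'
PACKMATE_LOCAL_IP=192.168.1.124
PACKMATE_WEB_LOGIN=SomeUser
PACKMATE_WEB_PASSWORD=SomeSecurePassword
EOF
# Sanity check: count the PACKMATE_ variables we just wrote
grep -c '^PACKMATE_' .env
```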
If we are capturing the server's traffic live (the best option when possible):
```dotenv
# Mode: live capture
PACKMATE_MODE=LIVE
# Interface on which traffic is captured
PACKMATE_INTERFACE=wlan0
```
If we are analyzing a pcap dump:
```dotenv
# Mode: file analysis
PACKMATE_MODE=FILE
# Path to the file from the project root
PACKMATE_PCAP_FILE=dump.pcap
```
Or, if we want to view already processed traffic (e.g., for a post-game review):
```dotenv
PACKMATE_MODE=VIEW
```
When capturing live traffic, it is recommended to enable removal of old streams; otherwise,
toward the end of the competition the analyzer will slow down.
```dotenv
PACKMATE_OLD_STREAMS_CLEANUP_ENABLED=true
# Interval between old stream removals (in minutes).
# Prefer a small number so streams are removed in small chunks and don't load the system
PACKMATE_OLD_STREAMS_CLEANUP_INTERVAL=1
# How old a stream must be to get removed (in minutes before the current time)
PACKMATE_OLD_STREAMS_CLEANUP_THRESHOLD=240
```
To use TLS decryption, put the private key that was used
to generate the certificate into the `rsa_keys` folder.
[Instructions](docs/SETUP.md)
### Launch
After setting the required values in the env file, start the application:
```bash
sudo docker compose up --build -d
```
If it starts successfully, Packmate will be reachable from any host on port `65000`.
### Getting Started
When you open the web interface for the first time, the browser will ask for the login and password
specified in the env file.
If needed, additional parameters can be configured via the gear button in the top
right corner of the screen.
![Settings screenshot](screenshots/Screenshot_Settings.png)
All settings are stored in local storage and are lost only when the server IP or port changes.
The database will listen on port 65001 but will only accept connections from localhost.
## Usage
First, create the services present in the game.
To do this, open the dialog by clicking the `+` button in the navbar,
where you can set the service name and port, as well as additional options.
For convenient flag catching, the application provides a pattern system.
To create a pattern, open the `Patterns` dropdown menu and click the `+` button,
then specify the desired search type, the pattern itself, the text highlight color, and so on.
If you choose the IGNORE pattern type, streams matching the pattern will be deleted automatically.
This can be handy to keep the database free of traffic from exploits that have already been patched.
In LIVE mode, the system starts capturing streams automatically and shows them in the sidebar.
In FILE mode, press the corresponding button in the sidebar to start processing the file.
Clicking a stream shows the list of its packets in the main container;
you can switch between binary and text views with a button in the sidebar.
### Hotkeys
For quick navigation between streams, the following hotkeys are available:
* `Ctrl+Up` -- move one stream up
* `Ctrl+Down` -- move one stream down
* `Ctrl+Home` -- jump to the latest stream
* `Ctrl+End` -- jump to the first stream
[Instructions](docs/USAGE.md)


@@ -25,7 +25,14 @@ Advanced network traffic flow analyzer for A/D CTFs.
* Can automatically decrypt TLS with RSA using given private key (like Wireshark)
![Main window](screenshots/Screenshot.png)
## Cloning
## Quick Start
To quickly start using Packmate, use [this starter](https://gitlab.com/packmate/starter/-/blob/master/README_EN.md).
## Full Build
Below are the instructions for those who want to build Packmate on their own.
### Cloning
As this repository contains the frontend as a git submodule, it has to be cloned like this:
```bash
git clone --recurse-submodules https://gitlab.com/packmate/Packmate.git
@@ -40,54 +47,8 @@ git pull
git submodule update --init --recursive
```
## Preparation
This program uses Docker and docker-compose.
`packmate-db` will listen on port 65001 on localhost.
Database files are saved in ./data, so to reset the database you'll have to delete that directory.
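A reset could look like the following sketch; the `docker compose down` step (commented out here) matters so Postgres is not writing to the files while they are removed:

```shell
# Sketch: wipe the Packmate database state
# Stop the stack first so Postgres is not mid-write:
#   sudo docker compose down
rm -rf ./data
```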
### Settings
This program retrieves its settings from environment variables,
so it is convenient to create an env file.
It must be called `.env` and located at the root of the project.
Contents of the file:
```dotenv
# Local IP on network interface or in pcap file to tell incoming packets from outgoing
PACKMATE_LOCAL_IP=192.168.1.124
# Username for the web interface
PACKMATE_WEB_LOGIN=SomeUser
# Password for the web interface
PACKMATE_WEB_PASSWORD=SomeSecurePassword
```
If we are capturing live traffic (best option if possible):
```dotenv
# Mode: capturing
PACKMATE_MODE=LIVE
# Interface to capture on
PACKMATE_INTERFACE=wlan0
```
If we are analyzing pcap dump:
```dotenv
# Mode: dump analyzing
PACKMATE_MODE=FILE
# Path to pcap file from project root
PACKMATE_PCAP_FILE=dump.pcap
```
When capturing live traffic, it's better to turn on old stream removal; otherwise, after some time Packmate
will start slowing down.
```dotenv
PACKMATE_OLD_STREAMS_CLEANUP_ENABLED=true
# Old streams removal interval (in minutes).
# It's better to use small numbers so the streams are removed in small chunks and don't overload the server.
PACKMATE_OLD_STREAMS_CLEANUP_INTERVAL=1
# How old the stream must be to be removed (in minutes before current time)
PACKMATE_OLD_STREAMS_CLEANUP_THRESHOLD=240
```
To decrypt TLS, put the private key used to generate a certificate into the `rsa_keys` folder.
### Setup
[Instructions](docs/SETUP_EN.md)
### Launch
After filling in env file you can launch the app:
@@ -95,42 +56,11 @@ After filling in env file you can launch the app:
sudo docker-compose up --build -d
```
### Accessing the web interface
When you open the web interface for the first time, you will be asked for the login and password
you specified in the env file.
After entering the credentials, open the settings by clicking the cogs
in the top right corner and modify additional parameters.
![Settings](screenshots/Screenshot_Settings.png)
All settings are saved in the local storage and will be
lost only upon changing server IP or port.
If everything went fine, Packmate will be available on port `65000` from any host.
The database will listen on port 65001 but will only accept connections from localhost.
## Usage
First of all, you should create game services.
To do that, click `+` in the navbar,
then fill in the service name, port, and optimizations to perform on streams.
For simple monitoring of flags, there is a pattern system.
To create a pattern, open `Patterns` dropdown menu, press `+`, then
specify the type of pattern, the pattern itself, highlight color and other things.
If you choose IGNORE as the type of a pattern, all matching streams will be automatically deleted.
This can be useful to filter out exploits you have already patched against.
In LIVE mode the system will automatically capture streams and show them in a sidebar.
In FILE mode, you'll have to press the appropriate button in the sidebar to start processing the file.
Note that you should only do that after all services are created.
Click on a stream to view the list of its packets;
you can click a button in the sidebar to switch between binary and text views.
### Shortcuts
To quickly navigate streams you can use the following shortcuts:
* `Ctrl+Up` -- go to the next stream
* `Ctrl+Down` -- go to the previous stream
* `Ctrl+Home` -- go to the latest stream
* `Ctrl+End` -- go to the first stream
[Instructions](docs/USAGE_EN.md)


@@ -1,50 +0,0 @@
plugins {
    id 'org.springframework.boot' version '2.6.3'
    id 'java'
}

apply plugin: 'io.spring.dependency-management'

group = 'ru.serega6531'
version = '1.0-SNAPSHOT'

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17

configurations {
    compileOnly {
        extendsFrom annotationProcessor
    }
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation "org.springframework.boot:spring-boot-starter-security"
    implementation "org.springframework.boot:spring-boot-starter-websocket"
    implementation 'org.springframework.session:spring-session-core'
    implementation 'com.github.jmnarloch:modelmapper-spring-boot-starter:1.1.0'
    implementation group: 'org.apache.commons', name: 'commons-lang3', version: '3.12.0'
    implementation group: 'commons-io', name: 'commons-io', version: '2.11.0'
    implementation 'org.pcap4j:pcap4j-core:1.8.2'
    implementation 'org.pcap4j:pcap4j-packetfactory-static:1.8.2'
    implementation group: 'com.google.guava', name: 'guava', version: '31.0.1-jre'
    implementation group: 'org.java-websocket', name: 'Java-WebSocket', version: '1.5.1'
    implementation group: 'org.bouncycastle', name: 'bcprov-jdk15on', version: '1.69'
    implementation group: 'org.bouncycastle', name: 'bctls-jdk15on', version: '1.70'
    implementation group: 'org.modelmapper', name: 'modelmapper', version: '2.4.5'
    compileOnly 'org.jetbrains:annotations:22.0.0'
    compileOnly 'org.projectlombok:lombok'
    runtimeOnly 'org.springframework.boot:spring-boot-devtools'
    runtimeOnly 'org.postgresql:postgresql'
    annotationProcessor 'org.projectlombok:lombok'
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.2'
}

test {
    useJUnitPlatform()
}

build.gradle.kts (new file)

@@ -0,0 +1,61 @@
plugins {
    id("org.springframework.boot") version "3.0.6"
    id("java")
    id("io.spring.dependency-management") version "1.1.0"
}

group = "ru.serega6531"
version = "1.0-SNAPSHOT"

java {
    sourceCompatibility = JavaVersion.VERSION_17
    targetCompatibility = JavaVersion.VERSION_17
}

configurations {
    get("compileOnly").apply {
        extendsFrom(configurations.annotationProcessor.get())
    }
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-data-jpa")
    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("org.springframework.boot:spring-boot-starter-security")
    implementation("org.springframework.boot:spring-boot-starter-websocket")
    implementation("org.springframework.session:spring-session-core")
    implementation(group = "org.apache.commons", name = "commons-lang3", version = "3.12.0")
    implementation(group = "commons-io", name = "commons-io", version = "2.11.0")
    implementation("org.pcap4j:pcap4j-core:1.8.2")
    implementation("org.pcap4j:pcap4j-packetfactory-static:1.8.2")
    constraints {
        implementation("net.java.dev.jna:jna:5.13.0") {
            because("upgraded version required to run on MacOS")
            // https://stackoverflow.com/questions/70368863/unsatisfiedlinkerror-for-m1-macs-while-running-play-server-locally
        }
    }
    implementation(group = "com.google.guava", name = "guava", version = "31.1-jre")
    implementation(group = "org.java-websocket", name = "Java-WebSocket", version = "1.5.3")
    implementation(group = "org.bouncycastle", name = "bcprov-jdk15on", version = "1.70")
    implementation(group = "org.bouncycastle", name = "bctls-jdk15on", version = "1.70")
    implementation(group = "org.modelmapper", name = "modelmapper", version = "3.1.1")
    implementation("com.athaydes.rawhttp:rawhttp-core:2.5.2")
    compileOnly("org.jetbrains:annotations:24.0.1")
    compileOnly("org.projectlombok:lombok")
    runtimeOnly("org.springframework.boot:spring-boot-devtools")
    runtimeOnly("org.postgresql:postgresql")
    annotationProcessor("org.projectlombok:lombok")
    testImplementation("org.junit.jupiter:junit-jupiter:5.9.2")
}

tasks.getByName<Test>("test") {
    useJUnitPlatform()
}


@@ -2,7 +2,6 @@ services:
  packmate: # port = 65000
    environment:
      DB_PASSWORD: ${PACKMATE_DB_PASSWORD:-K604YnL3G1hp2RDkCZNjGpxbyNpNHTRb}
      DB_NAME: ${PACKMATE_DB_NAME:-packmate}
      INTERFACE: ${PACKMATE_INTERFACE:-}
      LOCAL_IP: ${PACKMATE_LOCAL_IP}
      MODE: ${PACKMATE_MODE:-LIVE}
@@ -20,16 +19,9 @@ services:
      dockerfile: docker/Dockerfile_app
    network_mode: "host"
    image: registry.gitlab.com/packmate/packmate:${BUILD_TAG:-latest}
    command: [
      "java", "-Djava.net.preferIPv4Stack=true", "-Djava.net.preferIPv4Addresses=true",
      "-jar", "/app/app.jar", "--spring.datasource.url=jdbc:postgresql://127.0.0.1:65001/$${DB_NAME}",
      "--spring.datasource.password=$${DB_PASSWORD}",
      "--capture-mode=$${MODE}", "--pcap-file=$${PCAP_FILE}",
      "--interface-name=$${INTERFACE}", "--local-ip=$${LOCAL_IP}", "--account-login=$${WEB_LOGIN}",
      "--old-streams-cleanup-enabled=$${OLD_STREAMS_CLEANUP_ENABLED}", "--cleanup-interval=$${OLD_STREAMS_CLEANUP_INTERVAL}",
      "--old-streams-threshold=$${OLD_STREAMS_CLEANUP_THRESHOLD}",
      "--account-password=$${WEB_PASSWORD}", "--server.port=65000", "--server.address=0.0.0.0"
    ]
    volumes:
      - "./pcaps/:/app/pcaps/:ro"
      - "./rsa_keys/:/app/rsa_keys/:ro"
    depends_on:
      db:
        condition: service_healthy
@@ -38,7 +30,7 @@ services:
    environment:
      POSTGRES_USER: packmate
      POSTGRES_PASSWORD: ${PACKMATE_DB_PASSWORD:-K604YnL3G1hp2RDkCZNjGpxbyNpNHTRb}
      POSTGRES_DB: ${PACKMATE_DB_NAME:-packmate}
      POSTGRES_DB: packmate
    env_file:
      - .env
    volumes:

@@ -13,4 +13,17 @@ FROM eclipse-temurin:17-jre
WORKDIR /app
RUN apt update && apt install -y libpcap0.8 && rm -rf /var/lib/apt/lists/*
COPY --from=1 /tmp/compile/build/libs/packmate-*-SNAPSHOT.jar app.jar
CMD [ "java", "-Djava.net.preferIPv4Stack=true", "-Djava.net.preferIPv4Addresses=true", \
"-jar", "/app/app.jar", "--spring.datasource.url=jdbc:postgresql://127.0.0.1:65001/packmate", \
"--spring.datasource.password=${DB_PASSWORD}", \
"--packmate.capture-mode=${MODE}", "--packmate.pcap-file=${PCAP_FILE}", \
"--packmate.interface-name=${INTERFACE}", "--packmate.local-ip=${LOCAL_IP}", \
"--packmate.web.account-login=${WEB_LOGIN}", "--packmate.web.account-password=${WEB_PASSWORD}", \
"--packmate.cleanup.enabled=${OLD_STREAMS_CLEANUP_ENABLED}", \
"--packmate.cleanup.interval=${OLD_STREAMS_CLEANUP_INTERVAL}", \
"--packmate.cleanup.threshold=${OLD_STREAMS_CLEANUP_THRESHOLD}", \
"--server.port=65000", "--server.address=0.0.0.0" \
]
EXPOSE 65000

docs/SETUP.md (new file)

@@ -0,0 +1,89 @@
## Setup
Packmate uses settings from the `.env` file (in the same directory as `docker-compose.yml`)
### Primary settings
```dotenv
# Local IP of the server that receives the game traffic
PACKMATE_LOCAL_IP=10.20.1.1
# Username for the web interface
PACKMATE_WEB_LOGIN=SomeUser
# Password for the web interface
PACKMATE_WEB_PASSWORD=SomeSecurePassword
```
### Modes of operation
Packmate supports three main modes of operation: `LIVE`, `FILE` and `VIEW`.
1. `LIVE` - the main mode during a CTF. Packmate processes live traffic and displays the results immediately.
2. `FILE` - processes traffic from pcap files. Useful for analyzing traffic from past CTFs where Packmate wasn't running, or ones where it couldn't be run on the vulnbox.
3. `VIEW` - Packmate does not process traffic, only shows already processed streams. Useful for post-game reviews.
<details>
<summary>LIVE setup</summary>
Specify the interface the game traffic passes through.
The IP given in `PACKMATE_LOCAL_IP` must be bound to the same interface
```dotenv
# Mode: live capture
PACKMATE_MODE=LIVE
# Interface on which traffic is captured
PACKMATE_INTERFACE=game
```
</details>
<details>
<summary>FILE setup</summary>
Specify the name of a pcap file located in the pcaps directory.
After startup, the web interface will show a button that starts reading the file.
It is important that services and patterns are created by that moment (see the Usage section).
```dotenv
# Mode: file analysis
PACKMATE_MODE=FILE
# Name of the file in the pcaps directory
PACKMATE_PCAP_FILE=dump.pcap
```
</details>
<details>
<summary>VIEW setup</summary>
In this mode, Packmate simply shows the data it already has.
```dotenv
# Mode: viewing
PACKMATE_MODE=VIEW
```
</details>
### Database cleanup
On large CTFs, a lot of traffic piles up after a while. It slows Packmate down and takes a lot of disk space.
To keep things fast, it is recommended to enable regular cleanup of old streams from the database. It only works in `LIVE` mode.
```dotenv
PACKMATE_OLD_STREAMS_CLEANUP_ENABLED=true
# Interval between old stream removals (in minutes).
# Prefer a small number so streams are removed in small chunks and don't load the system
PACKMATE_OLD_STREAMS_CLEANUP_INTERVAL=1
# How old a stream must be to get removed (in minutes before the current time)
PACKMATE_OLD_STREAMS_CLEANUP_THRESHOLD=240
```
### Additional settings
```dotenv
# Database password. Since the database only accepts connections from localhost, changing it is optional, but you can for extra security.
PACKMATE_DB_PASSWORD=K604YnL3G1hp2RDkCZNjGpxbyNpNHTRb
# Packmate version. Change it to use a different image from the docker registry.
BUILD_TAG=latest
```
To use TLS decryption (with RSA), put the private key that was used
to generate the certificate into the `rsa_keys` folder.
Database files are saved in ./data, so to reset the database, delete that directory.

docs/SETUP_EN.md (new file)

@@ -0,0 +1,88 @@
## Setup
Packmate uses properties from the `.env` file (in the same directory as `docker-compose.yml`)
### Primary settings
```dotenv
# Local IP of the server to which the traffic is directed. Used to tell incoming packets from outgoing.
PACKMATE_LOCAL_IP=10.20.1.1
# Username for the web interface
PACKMATE_WEB_LOGIN=SomeUser
# Password for the web interface
PACKMATE_WEB_PASSWORD=SomeSecurePassword
```
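For `PACKMATE_WEB_PASSWORD`, a randomly generated value is preferable to a guessable one. One possible sketch (any password generator works just as well):

```shell
# Sketch: generate a random 24-character password for PACKMATE_WEB_PASSWORD
head -c 18 /dev/urandom | base64
```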
### Modes of operation
Packmate supports three modes of operation: `LIVE`, `FILE`, and `VIEW`.
1. `LIVE` - the usual mode during a CTF. Packmate processes live traffic and instantly displays the results.
2. `FILE` - processes traffic from pcap files. Useful to analyze traffic from past CTFs where Packmate wasn't launched, or CTFs where it's impossible to use it on the vulnbox.
3. `VIEW` - Packmate does not process any traffic, but simply shows already processed streams. Useful for post-game analyses.
<details>
<summary>LIVE setup</summary>
Set the interface through which the game traffic passes.
IP address from `PACKMATE_LOCAL_IP` should be bound to the same interface.
```dotenv
# Mode: capturing
PACKMATE_MODE=LIVE
# Interface to capture on
PACKMATE_INTERFACE=game
```
</details>
<details>
<summary>FILE setup</summary>
Set the name of the pcap file in the `pcaps` directory.
After startup, the web interface will show a button that starts the file processing.
It's important that all services and patterns are created by that moment (see Usage).
```dotenv
# Mode: pcap file analysis
PACKMATE_MODE=FILE
# Name of the pcap file in the pcaps directory
PACKMATE_PCAP_FILE=dump.pcap
```
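Before starting the stack, it can help to confirm the dump is actually in the mounted `pcaps/` directory. A sketch, assuming the example values above:

```shell
# Sketch: confirm the dump exists where Packmate expects it
PCAP=dump.pcap   # value of PACKMATE_PCAP_FILE
if [ -f "pcaps/$PCAP" ]; then
  echo "found pcaps/$PCAP"
else
  echo "missing pcaps/$PCAP" >&2
fi
```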
</details>
<details>
<summary>VIEW setup</summary>
In that mode, Packmate simply shows already existing data.
```dotenv
# Mode: viewing the data
PACKMATE_MODE=VIEW
```
</details>
### Database cleanup
On large CTFs, a lot of traffic piles up after some time. This can slow Packmate down and take a lot of disk space.
To keep things fast, it is recommended to enable periodic cleanup of old streams from the database. It only works in the `LIVE` mode.
```dotenv
PACKMATE_OLD_STREAMS_CLEANUP_ENABLED=true
# Old streams removal interval (in minutes).
# It's better to use small numbers so the streams are removed in small chunks and don't overload the server.
PACKMATE_OLD_STREAMS_CLEANUP_INTERVAL=1
# How old the stream must be to be removed (in minutes before current time)
PACKMATE_OLD_STREAMS_CLEANUP_THRESHOLD=240
```
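To make the example values concrete, here is a small sketch of what they mean together (an interpretation of the settings above, not Packmate's actual implementation):

```shell
# Sketch: what the example cleanup values mean in practice
THRESHOLD=240   # minutes of traffic kept (PACKMATE_OLD_STREAMS_CLEANUP_THRESHOLD)
INTERVAL=1      # minutes between cleanup runs (PACKMATE_OLD_STREAMS_CLEANUP_INTERVAL)
echo "streams older than $((THRESHOLD / 60)) hours are removed, checked every $INTERVAL minute(s)"
```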
### Additional settings
```dotenv
# Database password. Since it only listens on localhost, changing it is not mandatory, but you can for additional security.
PACKMATE_DB_PASSWORD=K604YnL3G1hp2RDkCZNjGpxbyNpNHTRb
# Packmate version. Change it if you want to use a different version from the docker registry.
BUILD_TAG=latest
```
To use the TLS decryption, you have to put the matching private key in the `rsa_keys` directory.
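A quick sanity check that a PEM-encoded private key is in place might look like this sketch (`server.key` is a hypothetical file name):

```shell
# Sketch: verify rsa_keys/ holds a PEM-encoded private key
mkdir -p rsa_keys
# (drop your server's key here, e.g. rsa_keys/server.key)
grep -l 'PRIVATE KEY' rsa_keys/* 2>/dev/null || echo "no private key found in rsa_keys/"
```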
Database files are saved in `./data`, so to reset the database, delete this directory.

docs/USAGE.md (new file)

@@ -0,0 +1,117 @@
## Usage
### Settings
When you open the web interface for the first time, the browser will ask for the login and password
specified in the env file.
If necessary, additional parameters can be configured via the gear button in the top
right corner of the screen.
<img alt="Screenshot of settings" width="400" src="../screenshots/Screenshot_Settings.png"/>
### Creating services
First, create the services present in the game. If you don't, no streams will be saved!
To do this, open the dialog by clicking the `+` button in the navbar,
where you can set the service name and port, as well as additional options.
<img alt="Screenshot of the service creation window" src="../screenshots/Screenshot_Service.png" width="600"/>
#### Service parameters:
1. Name
2. Port (if a service uses several ports, create a separate service for each port)
3. Chunked transfer encoding: automatically decode [chunked](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding#chunked_encoding) HTTP packets
4. Urldecode: automatically urldecode packets. Worth enabling by default for HTTP services.
5. Merge adjacent packets: automatically merge adjacent packets going in the same direction. Worth enabling by default for non-binary services.
6. Inflate WebSockets: automatically decompress [compressed](https://www.rfc-editor.org/rfc/rfc7692) WebSocket packets.
7. Decrypt TLS: automatically decrypt TLS traffic (HTTPS).
Only works with TLS_RSA_WITH_AES_* cipher suites and requires the private key used in the server certificate (like Wireshark).
### Creating patterns
For convenient exploit catching, the application provides a pattern system.
To create a pattern, open the `Patterns` dropdown menu and click the `+` button,
then specify the pattern parameters and save.
Important: a pattern only applies to streams captured after its creation. But you can use Lookback to also analyze past streams.
<img alt="Screenshot of the pattern creation window" src="../screenshots/Screenshot_Pattern.png" width="400"/>
#### Pattern parameters:
1. Name: displayed in the list on streams that contain this pattern.
2. Pattern: the content of the pattern itself. Can be a string, a regular expression, or a hex string, depending on the pattern type.
3. Pattern action
    1. Highlight highlights the found pattern. Example: searching for flags.
    2. Ignore deletes a stream containing this pattern.
    Example: you patched a service against a certain vulnerability and no longer want to see a specific exploit in the traffic. Add this exploit as a pattern with the IGNORE type, and it will no longer be saved.
4. Color: patterns of the Highlight type are highlighted with this color
5. Search method: substring, regular expression, binary substring
6. Search direction: everywhere, only in requests, only in responses
7. Service: search the traffic of all services or of a specific one
### Start of the game
In LIVE mode, the system starts capturing streams automatically and shows them in the sidebar.
In FILE mode, press the corresponding button in the sidebar to start processing the file.
Clicking a stream shows the list of its packets in the main window;
you can switch between binary and text views with a button in the sidebar.
### Navbar overview
![Navbar screenshot](../screenshots/Screenshot_Navbar.png)
1. Title
2. SPM counter - Streams Per Minute
3. PPS counter - Packets Per Stream, the average number of packets in a stream
4. Button that opens the pattern list
5. Service list. For each service:
    1. Name
    2. Port
    3. Per-service SPM counter - helps spot the most popular services
    4. Service edit button
6. Button for adding a new service
7. Settings button
### Sidebar overview
![Sidebar screenshot](../screenshots/Screenshot_Sidebar.png)
The left panel of Packmate holds the streams of the selected service.
It shows the stream number, protocol, TTL, service, time, User-Agent hash (for HTTP services) and the found patterns.
Tip: sometimes CTF organizers forget to rewrite the TTL of packets inside the network. In that case, the TTL lets you tell requests from checkers and from other teams apart.
Tip #2: the User-Agent helps distinguish requests from different sources. For example, one can assume that on the screenshot above, requests 4 and 5 came from different sources.
Tip #3: click the star to add an interesting stream to favorites. The stream will be highlighted in the list and will appear in the list of favorite streams.
#### Viewing controls
<img alt="Control panel" src="../screenshots/Screenshot_Sidebar_header.png" width="400"/>
1. Pause: stop/resume showing new streams on the screen. Does not stop stream capture or the display for other users! Useful when streams arrive too fast.
2. Favorites: show only streams marked as favorite
3. Toggle text/hexdump view
4. Start analysis: only appears when running in `FILE` mode
5. Scroll the stream list to the newest stream
### Pattern menu overview
![Pattern list](../screenshots/Screenshot_Patterns.png)
1. Pattern add button
2. Select all streams (remove filtering by pattern)
3. Pattern list. In each row:
    1. Pattern description
    2. Lookback button - applies the pattern to streams processed before the pattern was created.
    3. Pause - a pattern cannot be deleted, but it can be paused. After that, it is not applied to new streams.
Tip: create separate patterns for incoming and outgoing flags. It makes it easier to tell the checker that plants flags apart from exploits.
Tip #2: use Lookback to investigate found exploits.
Example: you noticed that a service just handed a flag to the user `abc123` for no visible reason.
One can assume the attacking team created this user and staged the exploit in another stream.
But there is too much traffic in the game to find that stream manually.
Create a `SUBSTRING` pattern with the value `abc123` and run Lookback a few minutes back.
After that, with the pattern filter enabled, only streams mentioning this user will be shown.
### Hotkeys
For quick navigation between streams, the following hotkeys are available:
* `Ctrl+Up` -- move one stream up
* `Ctrl+Down` -- move one stream down
* `Ctrl+Home` -- jump to the latest stream
* `Ctrl+End` -- jump to the first stream

docs/USAGE_EN.md (new file)

@@ -0,0 +1,110 @@
## Usage
### Settings
When attempting to access the web interface for the first time, your browser will prompt for a login and password, which were specified in the env file.
If necessary, additional parameters can be configured via the gear icon in the top right corner of the screen.
<img alt="Screenshot of settings" src="../screenshots/Screenshot_Settings.png" width="400"/>
### Creating Services
First, you need to create services that are present in the game. If you don't do this, no streams will be saved!
To do this, open a dialog by clicking the `+` button in the navbar,
where you can specify the name and port of the service, as well as additional options.
<img alt="Screenshot of service creation window" src="../screenshots/Screenshot_Service.png" width="600"/>
#### Service Parameters:
1. Name
2. Port (if the service uses multiple ports, you need to create a Packmate service for each port)
3. Chunked transfer encoding: automatically decode [chunked](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding#chunked_encoding) HTTP packets
4. Urldecode: automatically perform URL decoding of packets. Should be enabled by default for HTTP services.
5. Merge adjacent packets: automatically merge adjacent packets in the same direction. Should be enabled by default for non-binary services.
6. Inflate WebSockets: automatically decompress [compressed](https://www.rfc-editor.org/rfc/rfc7692) WebSocket packets.
7. Decrypt TLS: automatically decrypt TLS traffic (HTTPS). Only works with TLS_RSA_WITH_AES_* cipher suites and requires the private key used in the server's certificate (just like Wireshark).
### Creating Patterns
To conveniently capture exploits in the application, a pattern system exists.
To create a pattern, open the dropdown menu `Patterns` and click the `+` button,
then specify the pattern parameters and save.
Important: the pattern will only apply to streams captured after its creation. But you can use Lookback to analyze past streams.
<img alt="Screenshot of pattern creation window" src="../screenshots/Screenshot_Pattern.png" width="400"/>
#### Pattern Parameters:
1. Name: it will be displayed in the list on streams that contain this pattern.
2. Pattern: the content of the pattern itself. It can be a string, a regular expression, or a hex string depending on the pattern type.
3. Pattern action:
1. Highlight will highlight the found pattern. Example: searching for flags.
2. Ignore will delete the stream containing this pattern.
Example: you patched a vulnerability in a service and no longer want to see a specific exploit in the traffic. You can add this exploit as a pattern with the IGNORE type, and matching streams will no longer be saved.
4. Color: the color with which patterns of Highlight type will be highlighted.
5. Search method: substring, regular expression, binary substring
6. Search type: everywhere, only in requests, only in responses
7. Service: search in the traffic of all services or in a specific one.
### Game Start
In LIVE mode, the system will automatically capture streams and display them in the sidebar.
In FILE mode, click the corresponding button in the sidebar to start processing a file.
When you click on a stream in the main window, a list of packets is displayed;
you can switch between binary and text representation using the button in the sidebar.
### Navbar Overview
![Navbar screenshot](../screenshots/Screenshot_Navbar.png)
1. Title
2. SPM counter - Streams Per Minute
3. PPS counter - (average number of) Packets Per Stream
4. Button to open the list of patterns
5. List of services. In each service:
1. Name
2. Port
3. SPM counter for the service - allows you to determine the most popular services
4. Service edit button
6. Button to add a new service
7. Button to open settings
### Sidebar Overview
![Sidebar screenshot](../screenshots/Screenshot_Sidebar.png)
Tip: Sometimes during CTFs, admins forget to overwrite the TTL of packets inside the network. In such cases, you can differentiate requests from checkers and other teams based on TTL.
Tip #2: User-Agent can be used to differentiate requests from different sources. For example, in the screenshot above, requests 4 and 5 may have come from different sources.
Tip #3: Click on the star icon to add an interesting stream to your favorites. This stream will be highlighted in the list and will appear in the list of favorite streams.
#### Control Panel
<img alt="Control Panel" src="../screenshots/Screenshot_Sidebar_header.png" width="400"/>
1. Pause: Stop/resume displaying new streams on the screen. It does not stop intercepting streams or showing them to other users! Useful if streams are flying by too quickly.
2. Favorites: Show only streams marked as favorites.
3. Switch text/hexdump view.
4. Start analysis: Only appears when running in `FILE` mode.
5. Scroll stream list to the newest.
### Pattern Menu Overview
![Pattern List](../screenshots/Screenshot_Patterns.png)
1. Add Pattern Button
2. Select All Streams (do not filter by pattern)
3. Pattern List. Each line contains:
1. Pattern Description
2. Lookback Button - applies the pattern to streams processed before the pattern creation.
3. Pause - pattern cannot be deleted, but can be paused. It will not be applied to new streams after pausing.
Tip: Create separate patterns for incoming and outgoing flags to easily distinguish between flag checkers and exploits.
Tip #2: Use Lookback to investigate discovered exploits.
Example: You found that the service just handed out a flag to user `abc123` without an apparent reason.
You can assume that the attacking team created this user and prepared an exploit in another stream.
But there is too much traffic in the game to manually find this stream.
Then you can create a `SUBSTRING` pattern with the value `abc123` and activate Lookback for a few minutes back.
After that, with the pattern filter enabled, only streams mentioning this user will be displayed.
### Hotkeys
Use the following hotkeys for quick navigation through streams:
* `Ctrl+Up` -- Move one stream up.
* `Ctrl+Down` -- Move one stream down.
* `Ctrl+Home` -- Go to the last stream.
* `Ctrl+End` -- Go to the first stream.


@@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-bin.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-8.1-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

pcaps/dump.pcap Normal file (binary file, not shown)

(Several binary image files added or updated; contents not shown.)


@@ -3,4 +3,4 @@ pluginManagement {
gradlePluginPortal()
}
}
rootProject.name = 'packmate'
rootProject.name = "packmate"


@@ -1,28 +1,35 @@
package ru.serega6531.packmate.configuration;
import org.modelmapper.Converter;
import org.modelmapper.ModelMapper;
import org.modelmapper.TypeMap;
import org.pcap4j.core.PcapNativeException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationPropertiesScan;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import ru.serega6531.packmate.model.enums.CaptureMode;
import ru.serega6531.packmate.model.Pattern;
import ru.serega6531.packmate.model.Stream;
import ru.serega6531.packmate.model.pojo.StreamDto;
import ru.serega6531.packmate.pcap.FilePcapWorker;
import ru.serega6531.packmate.pcap.LivePcapWorker;
import ru.serega6531.packmate.pcap.NoOpPcapWorker;
import ru.serega6531.packmate.pcap.PcapWorker;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.service.ServicesService;
import ru.serega6531.packmate.service.StreamService;
import ru.serega6531.packmate.service.SubscriptionService;
import java.net.UnknownHostException;
import java.util.Set;
import java.util.stream.Collectors;
@Configuration
@EnableScheduling
@EnableAsync
@ConfigurationPropertiesScan("ru.serega6531.packmate.properties")
public class ApplicationConfiguration {
@Bean(destroyMethod = "stop")
@@ -30,20 +37,37 @@ public class ApplicationConfiguration {
public PcapWorker pcapWorker(ServicesService servicesService,
StreamService streamService,
SubscriptionService subscriptionService,
@Value("${local-ip}") String localIpString,
@Value("${interface-name}") String interfaceName,
@Value("${pcap-file}") String filename,
@Value("${capture-mode}") CaptureMode captureMode) throws PcapNativeException, UnknownHostException {
return switch (captureMode) {
case LIVE -> new LivePcapWorker(servicesService, streamService, localIpString, interfaceName);
case FILE -> new FilePcapWorker(servicesService, streamService, subscriptionService, localIpString, filename);
PackmateProperties properties
) throws PcapNativeException, UnknownHostException {
return switch (properties.captureMode()) {
case LIVE -> new LivePcapWorker(servicesService, streamService, properties.localIp(), properties.interfaceName());
case FILE ->
new FilePcapWorker(servicesService, streamService, subscriptionService, properties.localIp(), properties.pcapFile());
case VIEW -> new NoOpPcapWorker();
};
}
@Bean
public PasswordEncoder passwordEncoder() {
return new BCryptPasswordEncoder();
public ModelMapper modelMapper() {
ModelMapper modelMapper = new ModelMapper();
addStreamMapper(modelMapper);
return modelMapper;
}
private void addStreamMapper(ModelMapper modelMapper) {
TypeMap<Stream, StreamDto> streamMapper = modelMapper.createTypeMap(Stream.class, StreamDto.class);
Converter<Set<Pattern>, Set<Integer>> patternSetToIdSet = ctx -> ctx.getSource()
.stream()
.map(Pattern::getId)
.collect(Collectors.toSet());
streamMapper.addMappings(mapping ->
mapping.using(patternSetToIdSet)
.map(Stream::getFoundPatterns, StreamDto::setFoundPatternsIds)
);
}
}
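The new `modelMapper` bean's converter is, at its core, a stream transform from pattern entities to their ids. A minimal plain-Java sketch of the same mapping, where `Pattern` is a hypothetical stand-in for the JPA entity (only the id matters here):

```java
import java.util.Set;
import java.util.stream.Collectors;

public class PatternIdMapping {
    // Hypothetical stand-in for the Pattern entity.
    record Pattern(int id, String name) {
        int getId() { return id; }
    }

    // Mirrors the Converter<Set<Pattern>, Set<Integer>> used in addStreamMapper.
    static Set<Integer> toIdSet(Set<Pattern> patterns) {
        return patterns.stream()
                .map(Pattern::getId)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<Pattern> found = Set.of(new Pattern(1, "flag-out"), new Pattern(7, "flag-in"));
        assert toIdSet(found).equals(Set.of(1, 7));
    }
}
```

Sending only ids keeps the `StreamDto` payload small; the frontend can resolve ids against the pattern list it already holds.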


@@ -1,57 +1,58 @@
package ru.serega6531.packmate.configuration;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.security.authentication.event.AuthenticationFailureBadCredentialsEvent;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;
import org.springframework.security.web.SecurityFilterChain;
import ru.serega6531.packmate.properties.PackmateProperties;
@Configuration
@EnableWebSecurity
@Slf4j
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
public class SecurityConfiguration {
@Value("${account-login}")
private String login;
@Bean
public InMemoryUserDetailsManager userDetailsService(PackmateProperties properties, PasswordEncoder passwordEncoder) {
UserDetails user = User.builder()
.username(properties.web().accountLogin())
.password(passwordEncoder.encode(properties.web().accountPassword()))
.roles("USER")
.build();
@Value("${account-password}")
private String password;
private final PasswordEncoder passwordEncoder;
@Autowired
public SecurityConfiguration(PasswordEncoder passwordEncoder) {
this.passwordEncoder = passwordEncoder;
return new InMemoryUserDetailsManager(user);
}
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication()
.withUser(login)
.password(passwordEncoder.encode(password))
.authorities("ROLE_USER");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf()
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
return http.csrf()
.disable()
.authorizeRequests()
.antMatchers("/site.webmanifest")
.authorizeHttpRequests((auth) ->
auth.requestMatchers("/site.webmanifest")
.permitAll()
.anyRequest().authenticated()
.and()
.anyRequest()
.authenticated()
)
.httpBasic()
.and()
.headers()
.frameOptions()
.sameOrigin();
.sameOrigin()
.and()
.build();
}
@Bean
public PasswordEncoder passwordEncoder() {
return new BCryptPasswordEncoder();
}
@EventListener


@@ -1,14 +1,16 @@
package ru.serega6531.packmate.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import ru.serega6531.packmate.model.Packet;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import ru.serega6531.packmate.model.pojo.PacketDto;
import ru.serega6531.packmate.model.pojo.PacketPagination;
import ru.serega6531.packmate.service.StreamService;
import java.util.List;
import java.util.stream.Collectors;
@RestController
@RequestMapping("/api/packet/")
@@ -23,10 +25,7 @@ public class PacketController {
@PostMapping("/{streamId}")
public List<PacketDto> getPacketsForStream(@PathVariable long streamId, @RequestBody PacketPagination pagination) {
List<Packet> packets = streamService.getPackets(streamId, pagination.getStartingFrom(), pagination.getPageSize());
return packets.stream()
.map(streamService::packetToDto)
.collect(Collectors.toList());
return streamService.getPackets(streamId, pagination.getStartingFrom(), pagination.getPageSize());
}
}


@@ -1,13 +1,20 @@
package ru.serega6531.packmate.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import ru.serega6531.packmate.model.Pattern;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import ru.serega6531.packmate.model.pojo.PatternCreateDto;
import ru.serega6531.packmate.model.pojo.PatternDto;
import ru.serega6531.packmate.model.pojo.PatternUpdateDto;
import ru.serega6531.packmate.service.PatternService;
import java.util.List;
import java.util.stream.Collectors;
@RestController
@RequestMapping("/api/pattern/")
@@ -24,14 +31,19 @@ public class PatternController {
public List<PatternDto> getPatterns() {
return service.findAll()
.stream().map(service::toDto)
.collect(Collectors.toList());
.toList();
}
@PostMapping("/{id}")
@PostMapping("/{id}/enable")
public void enable(@PathVariable int id, @RequestParam boolean enabled) {
service.enable(id, enabled);
}
@DeleteMapping("/{id}")
public void delete(@PathVariable int id) {
service.delete(id);
}
@PostMapping("/{id}/lookback")
public void lookBack(@PathVariable int id, @RequestBody int minutes) {
if (minutes < 1) {
@@ -42,11 +54,13 @@ public class PatternController {
}
@PostMapping
public PatternDto addPattern(@RequestBody PatternDto dto) {
dto.setEnabled(true);
Pattern pattern = service.fromDto(dto);
Pattern saved = service.save(pattern);
return service.toDto(saved);
public PatternDto addPattern(@RequestBody PatternCreateDto dto) {
return service.create(dto);
}
@PostMapping("/{id}")
public PatternDto updatePattern(@PathVariable int id, @RequestBody PatternUpdateDto dto) {
return service.update(id, dto);
}
}


@@ -1,13 +1,19 @@
package ru.serega6531.packmate.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import ru.serega6531.packmate.model.CtfService;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import ru.serega6531.packmate.model.pojo.ServiceCreateDto;
import ru.serega6531.packmate.model.pojo.ServiceDto;
import ru.serega6531.packmate.model.pojo.ServiceUpdateDto;
import ru.serega6531.packmate.service.ServicesService;
import java.util.List;
import java.util.stream.Collectors;
@RestController
@RequestMapping("/api/service/")
@@ -22,9 +28,7 @@ public class ServiceController {
@GetMapping
public List<ServiceDto> getServices() {
return service.findAll().stream()
.map(service::toDto)
.collect(Collectors.toList());
return service.findAll();
}
@DeleteMapping("/{port}")
@@ -33,9 +37,13 @@ public class ServiceController {
}
@PostMapping
public CtfService addService(@RequestBody ServiceDto dto) {
CtfService newService = this.service.fromDto(dto);
return this.service.save(newService);
public ServiceDto addService(@RequestBody ServiceCreateDto dto) {
return this.service.create(dto);
}
@PostMapping("/{port}")
public ServiceDto updateService(@PathVariable int port, @RequestBody ServiceUpdateDto dto) {
return this.service.update(port, dto);
}
}


@@ -1,14 +1,17 @@
package ru.serega6531.packmate.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import ru.serega6531.packmate.model.pojo.StreamPagination;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import ru.serega6531.packmate.model.pojo.StreamDto;
import ru.serega6531.packmate.model.pojo.StreamPagination;
import ru.serega6531.packmate.service.StreamService;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
@RestController
@RequestMapping("/api/stream/")
@@ -23,16 +26,12 @@ public class StreamController {
@PostMapping("/all")
public List<StreamDto> getStreams(@RequestBody StreamPagination pagination) {
return service.findAll(pagination, Optional.empty(), pagination.isFavorites()).stream()
.map(service::streamToDto)
.collect(Collectors.toList());
return service.findAll(pagination, Optional.empty(), pagination.isFavorites());
}
@PostMapping("/{port}")
public List<StreamDto> getStreams(@PathVariable int port, @RequestBody StreamPagination pagination) {
return service.findAll(pagination, Optional.of(port), pagination.isFavorites()).stream()
.map(service::streamToDto)
.collect(Collectors.toList());
return service.findAll(pagination, Optional.of(port), pagination.isFavorites());
}
@PostMapping("/{id}/favorite")


@@ -0,0 +1,15 @@
package ru.serega6531.packmate.exception;
import lombok.Data;
import lombok.EqualsAndHashCode;
import java.io.File;
@EqualsAndHashCode(callSuper = true)
@Data
public class PcapFileNotFoundException extends RuntimeException {
private final File file;
private final File directory;
}


@@ -0,0 +1,15 @@
package ru.serega6531.packmate.exception;
import lombok.Data;
import lombok.EqualsAndHashCode;
import java.util.List;
@EqualsAndHashCode(callSuper = true)
@Data
public class PcapInterfaceNotFoundException extends RuntimeException {
private final String requestedInterface;
private final List<String> existingInterfaces;
}


@@ -0,0 +1,42 @@
package ru.serega6531.packmate.exception.analyzer;
import org.springframework.boot.diagnostics.AbstractFailureAnalyzer;
import org.springframework.boot.diagnostics.FailureAnalysis;
import ru.serega6531.packmate.exception.PcapFileNotFoundException;
import java.io.File;
import java.util.Arrays;
import java.util.List;
public class PcapFileNotFoundFailureAnalyzer extends AbstractFailureAnalyzer<PcapFileNotFoundException> {
@Override
protected FailureAnalysis analyze(Throwable rootFailure, PcapFileNotFoundException cause) {
String description = "The file " + cause.getFile().getAbsolutePath() + " was not found";
String existingFilesMessage;
File[] existingFiles = cause.getDirectory().listFiles();
if (existingFiles == null) {
return new FailureAnalysis(
description,
"Make sure you've put the pcap file to the ./pcaps directory, not the root directory. " +
"The directory currently does not exist",
cause
);
}
if (existingFiles.length == 0) {
existingFilesMessage = "The pcaps directory is currently empty";
} else {
List<String> existingFilesNames = Arrays.stream(existingFiles).map(File::getName).toList();
existingFilesMessage = "The files present in " + cause.getDirectory().getAbsolutePath() + " are: " + existingFilesNames;
}
return new FailureAnalysis(
description,
"Please verify the file name. Make sure you've put the pcap file to the ./pcaps directory, not the root directory.\n" +
existingFilesMessage,
cause
);
}
}
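The analyzer above is a three-way decision on the directory contents. The same logic, isolated as a pure function over file names (a simplified stand-in with no Spring or `java.io.File` types, so it can be exercised directly):

```java
import java.util.List;

public class PcapHintDemo {
    // Mirrors PcapFileNotFoundFailureAnalyzer's choice of hint message:
    // null = directory missing, empty = no files, otherwise list the names.
    static String existingFilesMessage(List<String> fileNames, String dirPath) {
        if (fileNames == null) {
            return "The directory currently does not exist";
        }
        if (fileNames.isEmpty()) {
            return "The pcaps directory is currently empty";
        }
        return "The files present in " + dirPath + " are: " + fileNames;
    }

    public static void main(String[] args) {
        assert existingFilesMessage(null, "/pcaps").contains("does not exist");
        assert existingFilesMessage(List.of("dump.pcap"), "/pcaps").contains("dump.pcap");
    }
}
```

Listing what *is* present turns a bare "file not found" into an actionable startup error, which is the whole point of a Spring Boot `FailureAnalyzer`.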


@@ -0,0 +1,16 @@
package ru.serega6531.packmate.exception.analyzer;
import org.springframework.boot.diagnostics.AbstractFailureAnalyzer;
import org.springframework.boot.diagnostics.FailureAnalysis;
import ru.serega6531.packmate.exception.PcapInterfaceNotFoundException;
public class PcapInterfaceNotFoundFailureAnalyzer extends AbstractFailureAnalyzer<PcapInterfaceNotFoundException> {
@Override
protected FailureAnalysis analyze(Throwable rootFailure, PcapInterfaceNotFoundException cause) {
return new FailureAnalysis(
"The interface \"" + cause.getRequestedInterface() + "\" was not found",
"Check the interface name in the config. Existing interfaces are: " + cause.getExistingInterfaces(),
cause
);
}
}


@@ -3,10 +3,10 @@ package ru.serega6531.packmate.model;
import lombok.*;
import org.hibernate.Hibernate;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import java.util.Objects;
@Getter
@@ -25,9 +25,7 @@ public class CtfService {
private boolean decryptTls;
private boolean processChunkedEncoding;
private boolean ungzipHttp;
private boolean http;
private boolean urldecodeHttpRequests;


@@ -5,7 +5,7 @@ import org.hibernate.Hibernate;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;
import javax.persistence.*;
import jakarta.persistence.*;
import java.util.Objects;
@Entity


@@ -5,7 +5,7 @@ import org.hibernate.Hibernate;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;
import javax.persistence.*;
import jakarta.persistence.*;
import java.util.Objects;
import java.util.Set;
@@ -24,7 +24,7 @@ import java.util.Set;
}
)
@AllArgsConstructor
@Builder
@Builder(toBuilder = true)
@Table(indexes = { @Index(name = "stream_id_index", columnList = "stream_id") })
public class Packet {
@@ -49,11 +49,13 @@ public class Packet {
private boolean incoming; // true если от клиента к серверу, иначе false
private boolean ungzipped;
private boolean httpProcessed = false;
private boolean webSocketParsed;
private boolean webSocketParsed = false;
private boolean tlsDecrypted;
private boolean tlsDecrypted = false;
private boolean hasHttpBody = false;
@Column(nullable = false)
private byte[] content;


@@ -1,5 +1,10 @@
package ru.serega6531.packmate.model;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Enumerated;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
@@ -11,14 +16,13 @@ import ru.serega6531.packmate.model.enums.PatternActionType;
import ru.serega6531.packmate.model.enums.PatternDirectionType;
import ru.serega6531.packmate.model.enums.PatternSearchType;
import javax.persistence.*;
import java.util.Objects;
@Getter
@Setter
@RequiredArgsConstructor
@ToString
@Entity
@Entity(name = "pattern")
@GenericGenerator(
name = "pattern_generator",
strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
@@ -34,8 +38,12 @@ public class Pattern {
@GeneratedValue(generator = "pattern_generator")
private Integer id;
@Column(nullable = false)
private boolean enabled;
@Column(nullable = false)
private boolean deleted = false;
@Column(nullable = false)
private String name;


@@ -9,7 +9,7 @@ import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;
import ru.serega6531.packmate.model.enums.Protocol;
import javax.persistence.*;
import jakarta.persistence.*;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
@@ -53,7 +53,7 @@ public class Stream {
private long endTimestamp;
@ManyToMany(fetch = FetchType.EAGER)
@ManyToMany
@JoinTable(
name = "stream_found_patterns",
joinColumns = @JoinColumn(name = "stream_id"),
@@ -70,6 +70,12 @@ public class Stream {
@Column(columnDefinition = "char(3)")
private String userAgentHash;
@Column(name = "size_bytes", nullable = false)
private Integer sizeBytes;
@Column(name = "packets_count", nullable = false)
private Integer packetsCount;
@Override
public boolean equals(Object o) {
if (this == o) return true;


@@ -2,7 +2,7 @@ package ru.serega6531.packmate.model.enums;
public enum SubscriptionMessageType {
SAVE_SERVICE, SAVE_PATTERN,
DELETE_SERVICE, DELETE_PATTERN,
DELETE_SERVICE,
NEW_STREAM,
FINISH_LOOKBACK,
COUNTERS_UPDATE,


@@ -1,23 +1,8 @@
package ru.serega6531.packmate.model.pojo;
import lombok.Getter;
import java.util.Map;
@Getter
public class CountersHolder {
private final Map<Integer, Integer> servicesPackets;
private final Map<Integer, Integer> servicesStreams;
private final int totalPackets;
private final int totalStreams;
public CountersHolder(Map<Integer, Integer> servicesPackets, Map<Integer, Integer> servicesStreams,
public record CountersHolder(Map<Integer, Integer> servicesPackets, Map<Integer, Integer> servicesStreams,
int totalPackets, int totalStreams) {
this.servicesPackets = servicesPackets;
this.servicesStreams = servicesStreams;
this.totalPackets = totalPackets;
this.totalStreams = totalStreams;
}
}
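The class-to-record conversion above drops the hand-written constructor and getters: a record declaration generates the canonical constructor, component accessors, `equals`, `hashCode`, and `toString`. A self-contained sketch of the same shape:

```java
import java.util.Map;

public class CountersDemo {
    // Same component shape as CountersHolder after the record conversion.
    record CountersHolder(Map<Integer, Integer> servicesPackets,
                          Map<Integer, Integer> servicesStreams,
                          int totalPackets, int totalStreams) {
    }

    public static void main(String[] args) {
        CountersHolder a = new CountersHolder(Map.of(80, 5), Map.of(80, 2), 5, 2);
        CountersHolder b = new CountersHolder(Map.of(80, 5), Map.of(80, 2), 5, 2);
        // Records get value-based equals/hashCode and accessors for free.
        assert a.equals(b);
        assert a.totalPackets() == 5;
    }
}
```

One caller-visible change in such a conversion: Lombok-style `getTotalPackets()` becomes the record accessor `totalPackets()`.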


@@ -14,6 +14,7 @@ public class PacketDto {
private boolean ungzipped;
private boolean webSocketParsed;
private boolean tlsDecrypted;
private boolean hasHttpBody;
private byte[] content;
}


@@ -0,0 +1,19 @@
package ru.serega6531.packmate.model.pojo;
import lombok.Data;
import ru.serega6531.packmate.model.enums.PatternActionType;
import ru.serega6531.packmate.model.enums.PatternDirectionType;
import ru.serega6531.packmate.model.enums.PatternSearchType;
@Data
public class PatternCreateDto {
private String name;
private String value;
private String color;
private PatternSearchType searchType;
private PatternDirectionType directionType;
private PatternActionType actionType;
private Integer serviceId;
}


@@ -10,6 +10,7 @@ public class PatternDto {
private int id;
private boolean enabled;
private boolean deleted;
private String name;
private String value;
private String color;


@@ -0,0 +1,11 @@
package ru.serega6531.packmate.model.pojo;
import lombok.Data;
@Data
public class PatternUpdateDto {
private String name;
private String color;
}


@@ -0,0 +1,16 @@
package ru.serega6531.packmate.model.pojo;
import lombok.Data;
@Data
public class ServiceCreateDto {
private int port;
private String name;
private boolean decryptTls;
private boolean http;
private boolean urldecodeHttpRequests;
private boolean mergeAdjacentPackets;
private boolean parseWebSockets;
}


@@ -8,8 +8,7 @@ public class ServiceDto {
private int port;
private String name;
private boolean decryptTls;
private boolean processChunkedEncoding;
private boolean ungzipHttp;
private boolean http;
private boolean urldecodeHttpRequests;
private boolean mergeAdjacentPackets;
private boolean parseWebSockets;


@@ -0,0 +1,16 @@
package ru.serega6531.packmate.model.pojo;
import lombok.Data;
@Data
public class ServiceUpdateDto {
private int port;
private String name;
private boolean decryptTls;
private boolean http;
private boolean urldecodeHttpRequests;
private boolean mergeAdjacentPackets;
private boolean parseWebSockets;
}


@@ -13,9 +13,11 @@ public class StreamDto {
private Protocol protocol;
private long startTimestamp;
private long endTimestamp;
private Set<PatternDto> foundPatterns;
private Set<Integer> foundPatternsIds;
private boolean favorite;
private int ttl;
private String userAgentHash;
private int sizeBytes;
private int packetsCount;
}


@@ -1,29 +1,18 @@
package ru.serega6531.packmate.model.pojo;
import lombok.AllArgsConstructor;
import lombok.Getter;
import ru.serega6531.packmate.model.enums.Protocol;
import java.net.InetAddress;
@AllArgsConstructor
@Getter
public class UnfinishedStream {
private final InetAddress firstIp;
private final InetAddress secondIp;
private final int firstPort;
private final int secondPort;
private final Protocol protocol;
public record UnfinishedStream(InetAddress firstIp, InetAddress secondIp, int firstPort, int secondPort,
Protocol protocol) {
@Override
public boolean equals(Object obj) {
if (!(obj instanceof UnfinishedStream)) {
if (!(obj instanceof UnfinishedStream o)) {
return false;
}
UnfinishedStream o = (UnfinishedStream) obj;
boolean ipEq1 = firstIp.equals(o.firstIp) && secondIp.equals(o.secondIp);
boolean ipEq2 = firstIp.equals(o.secondIp) && secondIp.equals(o.firstIp);
boolean portEq1 = firstPort == o.firstPort && secondPort == o.secondPort;
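The `instanceof` pattern variable above tightens the direction-insensitive comparison: two unfinished streams are equal if their (ip, port) endpoints match either directly or swapped. A simplified, self-contained sketch (strings stand in for `InetAddress`, and `hashCode` is deliberately commutative so it stays consistent with the symmetric `equals` — an assumption about how the real class must behave, not a copy of it):

```java
public class StreamKeyDemo {
    // Simplified stand-in for UnfinishedStream.
    record StreamKey(String firstIp, String secondIp, int firstPort, int secondPort) {
        @Override
        public boolean equals(Object obj) {
            if (!(obj instanceof StreamKey o)) {
                return false;
            }
            boolean direct = firstIp.equals(o.firstIp) && secondIp.equals(o.secondIp)
                    && firstPort == o.firstPort && secondPort == o.secondPort;
            boolean swapped = firstIp.equals(o.secondIp) && secondIp.equals(o.firstIp)
                    && firstPort == o.secondPort && secondPort == o.firstPort;
            return direct || swapped;
        }

        @Override
        public int hashCode() {
            // Commutative combination: the same value for both directions.
            return (firstIp.hashCode() + firstPort) ^ (secondIp.hashCode() + secondPort);
        }
    }

    public static void main(String[] args) {
        StreamKey ab = new StreamKey("10.0.0.1", "10.0.0.2", 31337, 443);
        StreamKey ba = new StreamKey("10.0.0.2", "10.0.0.1", 443, 31337);
        assert ab.equals(ba); // same stream seen from the other direction
        assert ab.hashCode() == ba.hashCode();
    }
}
```

Overriding only `equals` on a record would break hash-based lookups of in-flight streams, so any symmetric `equals` must come with an order-insensitive `hashCode`.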


@@ -52,11 +52,11 @@ public abstract class AbstractPcapWorker implements PcapWorker, PacketListener {
protected AbstractPcapWorker(ServicesService servicesService,
StreamService streamService,
String localIpString) throws UnknownHostException {
InetAddress localIp) throws UnknownHostException {
this.servicesService = servicesService;
this.streamService = streamService;
this.localIp = InetAddress.getByName(localIpString);
this.localIp = localIp;
BasicThreadFactory factory = new BasicThreadFactory.Builder()
.namingPattern("pcap-loop").build();


@@ -6,6 +6,7 @@ import org.apache.tomcat.util.threads.InlineExecutorService;
import org.pcap4j.core.PcapNativeException;
import org.pcap4j.core.Pcaps;
import org.pcap4j.packet.Packet;
import ru.serega6531.packmate.exception.PcapFileNotFoundException;
import ru.serega6531.packmate.model.enums.Protocol;
import ru.serega6531.packmate.model.enums.SubscriptionMessageType;
import ru.serega6531.packmate.model.pojo.SubscriptionMessage;
@@ -15,26 +16,27 @@ import ru.serega6531.packmate.service.SubscriptionService;
import java.io.EOFException;
import java.io.File;
import java.net.InetAddress;
import java.net.UnknownHostException;
@Slf4j
public class FilePcapWorker extends AbstractPcapWorker {
private final File directory = new File("pcaps");
private final SubscriptionService subscriptionService;
private final File file;
public FilePcapWorker(ServicesService servicesService,
StreamService streamService,
SubscriptionService subscriptionService,
String localIpString,
InetAddress localIp,
String filename) throws UnknownHostException {
super(servicesService, streamService, localIpString);
super(servicesService, streamService, localIp);
this.subscriptionService = subscriptionService;
file = new File(filename);
if (!file.exists()) {
throw new IllegalArgumentException("File " + file.getAbsolutePath() + " does not exist");
}
file = new File(directory, filename);
validateFileExists();
processorExecutorService = new InlineExecutorService();
}
@@ -84,4 +86,10 @@ public class FilePcapWorker extends AbstractPcapWorker {
public String getExecutorState() {
return "inline";
}
private void validateFileExists() {
if (!file.exists()) {
throw new PcapFileNotFoundException(file, directory);
}
}
}

View File

@@ -6,10 +6,13 @@ import org.apache.commons.lang3.concurrent.BasicThreadFactory;
import org.pcap4j.core.PcapNativeException;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.core.Pcaps;
import ru.serega6531.packmate.exception.PcapInterfaceNotFoundException;
import ru.serega6531.packmate.service.ServicesService;
import ru.serega6531.packmate.service.StreamService;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
@@ -21,14 +24,14 @@ public class LivePcapWorker extends AbstractPcapWorker {
public LivePcapWorker(ServicesService servicesService,
StreamService streamService,
String localIpString,
InetAddress localIp,
String interfaceName) throws PcapNativeException, UnknownHostException {
super(servicesService, streamService, localIpString);
super(servicesService, streamService, localIp);
device = Pcaps.getDevByName(interfaceName);
if(device == null) {
log.info("Existing devices: {}", Pcaps.findAllDevs().stream().map(PcapNetworkInterface::getName).toList());
throw new IllegalArgumentException("Device " + interfaceName + " does not exist");
if (device == null) {
List<String> existingInterfaces = Pcaps.findAllDevs().stream().map(PcapNetworkInterface::getName).toList();
throw new PcapInterfaceNotFoundException(interfaceName, existingInterfaces);
}
BasicThreadFactory factory = new BasicThreadFactory.Builder()

View File

@@ -1,11 +1,10 @@
package ru.serega6531.packmate.pcap;
import org.pcap4j.core.PcapNativeException;
import ru.serega6531.packmate.model.enums.Protocol;
public class NoOpPcapWorker implements PcapWorker {
@Override
public void start() throws PcapNativeException {
public void start() {
}
@Override

View File

@@ -0,0 +1,38 @@
package ru.serega6531.packmate.properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import ru.serega6531.packmate.model.enums.CaptureMode;
import java.net.InetAddress;
@ConfigurationProperties("packmate")
public record PackmateProperties(
CaptureMode captureMode,
String interfaceName,
String pcapFile,
InetAddress localIp,
WebProperties web,
TimeoutProperties timeout,
CleanupProperties cleanup,
boolean ignoreEmptyPackets
) {
public record WebProperties(
String accountLogin,
String accountPassword
) {}
public record TimeoutProperties(
int udpStreamTimeout,
int tcpStreamTimeout,
int checkInterval
){}
public record CleanupProperties(
boolean enabled,
int threshold,
int interval
){}
}

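For reference, a configuration fragment that would bind to this record under Spring Boot's relaxed binding — property names are derived mechanically from the record components, but the concrete values and the `CaptureMode` constants shown here are assumptions, not taken from the repository:

```yaml
packmate:
  capture-mode: LIVE          # assumed enum constant
  interface-name: eth0
  local-ip: 10.60.1.2
  ignore-empty-packets: true
  web:
    account-login: admin
    account-password: changeme
  timeout:
    udp-stream-timeout: 60
    tcp-stream-timeout: 120
    check-interval: 10
  cleanup:
    enabled: false
    threshold: 100000
    interval: 600
```

Moving from scattered `@Value("${...}")` injections to one typed record (as `ServicesService` and `StreamService` do below) centralizes validation and makes the available settings discoverable in one place.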
View File

@@ -1,11 +1,13 @@
package ru.serega6531.packmate.repository;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.*;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.model.Stream;
import javax.persistence.QueryHint;
import java.util.List;
public interface StreamRepository extends JpaRepository<Stream, Long>, JpaSpecificationExecutor<Stream> {
@@ -16,13 +18,12 @@ public interface StreamRepository extends JpaRepository<Stream, Long>, JpaSpecif
long deleteByEndTimestampBeforeAndFavoriteIsFalse(long threshold);
@Query("SELECT DISTINCT p FROM Packet p " +
@Query("SELECT p FROM Packet p " +
"LEFT JOIN FETCH p.matches " +
"WHERE p.stream.id = :streamId " +
"AND (:startingFrom IS NULL OR p.id > :startingFrom) " +
"ORDER BY p.id"
)
@QueryHints(@QueryHint(name = org.hibernate.jpa.QueryHints.HINT_PASS_DISTINCT_THROUGH, value = "false"))
List<Packet> getPackets(long streamId, Long startingFrom, Pageable pageable);
}

View File

@@ -1,25 +1,31 @@
package ru.serega6531.packmate.service;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import ru.serega6531.packmate.model.CtfService;
import ru.serega6531.packmate.model.FoundPattern;
import ru.serega6531.packmate.model.Pattern;
import ru.serega6531.packmate.model.enums.PatternActionType;
import ru.serega6531.packmate.model.enums.PatternDirectionType;
import ru.serega6531.packmate.model.enums.SubscriptionMessageType;
import ru.serega6531.packmate.model.pojo.PatternCreateDto;
import ru.serega6531.packmate.model.pojo.PatternDto;
import ru.serega6531.packmate.model.pojo.PatternUpdateDto;
import ru.serega6531.packmate.model.pojo.SubscriptionMessage;
import ru.serega6531.packmate.repository.PatternRepository;
import javax.annotation.PostConstruct;
import java.time.Instant;
import java.util.*;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
@Service
@Slf4j
@@ -59,11 +65,11 @@ public class PatternService {
public Set<FoundPattern> findMatches(byte[] bytes, CtfService service, PatternDirectionType directionType, PatternActionType actionType) {
final List<Pattern> list = patterns.values().stream()
.filter(Pattern::isEnabled)
.filter(p -> p.getServiceId() == null || p.getServiceId() == service.getPort())
.filter(pattern -> pattern.isEnabled() && !pattern.isDeleted())
.filter(p -> p.getServiceId() == null || p.getServiceId().equals(service.getPort()))
.filter(p -> p.getActionType() == actionType)
.filter(p -> p.getDirectionType() == directionType || p.getDirectionType() == PatternDirectionType.BOTH)
.collect(Collectors.toList());
.toList();
return new PatternMatcher(bytes, list).findMatches();
}
@@ -88,15 +94,47 @@ public class PatternService {
}
}
public Pattern save(Pattern pattern) {
public void delete(int id) {
final Pattern pattern = find(id);
if (pattern != null) {
pattern.setDeleted(true);
final Pattern saved = repository.save(pattern);
patterns.put(id, saved);
log.info("Deleted pattern '{}' with value '{}'", pattern.getName(), pattern.getValue());
subscriptionService.broadcast(new SubscriptionMessage(SubscriptionMessageType.SAVE_PATTERN, toDto(saved)));
}
}
@Transactional
public PatternDto create(PatternCreateDto dto) {
Pattern pattern = fromDto(dto);
pattern.setEnabled(true);
pattern.setDeleted(false);
pattern.setSearchStartTimestamp(System.currentTimeMillis());
Pattern saved = save(pattern);
return toDto(saved);
}
@Transactional
public PatternDto update(int id, PatternUpdateDto dto) {
Pattern pattern = repository.findById(id).orElseThrow();
modelMapper.map(dto, pattern);
Pattern saved = save(pattern);
return toDto(saved);
}
private Pattern save(Pattern pattern) {
try {
PatternMatcher.compilePattern(pattern);
} catch (Exception e) {
throw new IllegalArgumentException(e.getMessage());
}
pattern.setSearchStartTimestamp(System.currentTimeMillis());
final Pattern saved = repository.save(pattern);
patterns.put(saved.getId(), saved);
@@ -121,12 +159,11 @@ public class PatternService {
}
}
public Pattern fromDto(PatternDto dto) {
public Pattern fromDto(PatternCreateDto dto) {
return modelMapper.map(dto, Pattern.class);
}
public PatternDto toDto(Pattern pattern) {
return modelMapper.map(pattern, PatternDto.class);
}
}

View File

@@ -4,8 +4,8 @@ import lombok.extern.slf4j.Slf4j;
import org.pcap4j.core.PcapNativeException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import ru.serega6531.packmate.model.CtfService;
import ru.serega6531.packmate.model.enums.SubscriptionMessageType;
import ru.serega6531.packmate.model.pojo.ServiceDto;
import ru.serega6531.packmate.model.pojo.SubscriptionMessage;
import ru.serega6531.packmate.pcap.NoOpPcapWorker;
import ru.serega6531.packmate.pcap.PcapWorker;
@@ -40,14 +40,14 @@ public class PcapService {
}
}
public void updateFilter(Collection<CtfService> services) {
public void updateFilter(Collection<ServiceDto> services) {
String filter;
if (services.isEmpty()) {
filter = "tcp or udp";
} else {
final String ports = services.stream()
.map(CtfService::getPort)
.map(ServiceDto::getPort)
.map(p -> "port " + p)
.collect(Collectors.joining(" or "));

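The filter-assembly step shown in `updateFilter` can be sketched in isolation. The port-joining part mirrors the diff; wrapping the joined clause in a `(tcp or udp) and (...)` BPF expression is an assumption about the final composition, which the hunk does not show:

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of building a BPF capture filter from registered service ports.
public class FilterBuilder {
    static String buildFilter(List<Integer> ports) {
        if (ports.isEmpty()) {
            return "tcp or udp"; // no services registered: capture everything
        }
        // Same joining step as the source: "port 3000 or port 5000"
        String portsClause = ports.stream()
                .map(p -> "port " + p)
                .collect(Collectors.joining(" or "));
        // Assumed final composition of the expression
        return "(tcp or udp) and (" + portsClause + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildFilter(List.of(3000, 5000)));
        // (tcp or udp) and (port 3000 or port 5000)
    }
}
```

Narrowing the kernel-side filter to known service ports keeps irrelevant traffic from ever reaching the user-space packet loop.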
View File

@@ -1,21 +1,26 @@
package ru.serega6531.packmate.service;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.model.CtfService;
import ru.serega6531.packmate.model.enums.SubscriptionMessageType;
import ru.serega6531.packmate.model.pojo.ServiceCreateDto;
import ru.serega6531.packmate.model.pojo.ServiceDto;
import ru.serega6531.packmate.model.pojo.ServiceUpdateDto;
import ru.serega6531.packmate.model.pojo.SubscriptionMessage;
import ru.serega6531.packmate.repository.ServiceRepository;
import javax.annotation.PostConstruct;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.*;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
@Service
@Slf4j
@@ -35,12 +40,12 @@ public class ServicesService {
SubscriptionService subscriptionService,
@Lazy PcapService pcapService,
ModelMapper modelMapper,
@Value("${local-ip}") String localIpString) throws UnknownHostException {
PackmateProperties properties) {
this.repository = repository;
this.subscriptionService = subscriptionService;
this.pcapService = pcapService;
this.modelMapper = modelMapper;
this.localIp = InetAddress.getByName(localIpString);
this.localIp = properties.localIp();
}
@PostConstruct
@@ -67,8 +72,11 @@ public class ServicesService {
return Optional.ofNullable(services.get(port));
}
public Collection<CtfService> findAll() {
return services.values();
public List<ServiceDto> findAll() {
return services.values()
.stream()
.map(this::toDto)
.toList();
}
public void deleteByPort(int port) {
@@ -82,9 +90,31 @@ public class ServicesService {
updateFilter();
}
public CtfService save(CtfService service) {
log.info("Added or edited service '{}' at port {}", service.getName(), service.getPort());
@Transactional
public ServiceDto create(ServiceCreateDto dto) {
if (repository.existsById(dto.getPort())) {
throw new IllegalArgumentException("Service already exists");
}
CtfService service = fromDto(dto);
log.info("Added service '{}' at port {}", service.getName(), service.getPort());
return save(service);
}
@Transactional
public ServiceDto update(int port, ServiceUpdateDto dto) {
CtfService service = repository.findById(port).orElseThrow();
log.info("Edited service '{}' at port {}", service.getName(), service.getPort());
modelMapper.map(dto, service);
service.setPort(port);
return save(service);
}
private ServiceDto save(CtfService service) {
final CtfService saved = repository.save(service);
services.put(saved.getPort(), saved);
@@ -92,18 +122,18 @@ public class ServicesService {
updateFilter();
return saved;
return toDto(saved);
}
public void updateFilter() {
pcapService.updateFilter(findAll());
}
public ServiceDto toDto(CtfService service) {
private ServiceDto toDto(CtfService service) {
return modelMapper.map(service, ServiceDto.class);
}
public CtfService fromDto(ServiceDto dto) {
private CtfService fromDto(ServiceCreateDto dto) {
return modelMapper.map(dto, CtfService.class);
}

View File

@@ -4,7 +4,6 @@ import lombok.extern.slf4j.Slf4j;
import org.jetbrains.annotations.Nullable;
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
@@ -13,11 +12,20 @@ import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import ru.serega6531.packmate.model.*;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.model.CtfService;
import ru.serega6531.packmate.model.FoundPattern;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.model.Pattern;
import ru.serega6531.packmate.model.Stream;
import ru.serega6531.packmate.model.enums.PatternActionType;
import ru.serega6531.packmate.model.enums.PatternDirectionType;
import ru.serega6531.packmate.model.enums.SubscriptionMessageType;
import ru.serega6531.packmate.model.pojo.*;
import ru.serega6531.packmate.model.pojo.PacketDto;
import ru.serega6531.packmate.model.pojo.StreamDto;
import ru.serega6531.packmate.model.pojo.StreamPagination;
import ru.serega6531.packmate.model.pojo.SubscriptionMessage;
import ru.serega6531.packmate.model.pojo.UnfinishedStream;
import ru.serega6531.packmate.repository.StreamRepository;
import ru.serega6531.packmate.service.optimization.RsaKeysHolder;
import ru.serega6531.packmate.service.optimization.StreamOptimizer;
@@ -28,7 +36,6 @@ import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.stream.Collectors;
@Service
@Slf4j
@@ -41,7 +48,6 @@ public class StreamService {
private final SubscriptionService subscriptionService;
private final RsaKeysHolder keysHolder;
private final ModelMapper modelMapper;
private final boolean ignoreEmptyPackets;
private final java.util.regex.Pattern userAgentPattern = java.util.regex.Pattern.compile("User-Agent: (.+)\\r\\n");
@@ -54,7 +60,7 @@ public class StreamService {
SubscriptionService subscriptionService,
RsaKeysHolder keysHolder,
ModelMapper modelMapper,
@Value("${ignore-empty-packets}") boolean ignoreEmptyPackets) {
PackmateProperties properties) {
this.repository = repository;
this.patternService = patternService;
this.servicesService = servicesService;
@@ -62,7 +68,7 @@ public class StreamService {
this.subscriptionService = subscriptionService;
this.keysHolder = keysHolder;
this.modelMapper = modelMapper;
this.ignoreEmptyPackets = ignoreEmptyPackets;
this.ignoreEmptyPackets = properties.ignoreEmptyPackets();
}
/**
@@ -71,15 +77,15 @@ public class StreamService {
@Transactional(propagation = Propagation.NEVER)
public boolean saveNewStream(UnfinishedStream unfinishedStream, List<Packet> packets) {
final var serviceOptional = servicesService.findService(
unfinishedStream.getFirstIp(),
unfinishedStream.getFirstPort(),
unfinishedStream.getSecondIp(),
unfinishedStream.getSecondPort()
unfinishedStream.firstIp(),
unfinishedStream.firstPort(),
unfinishedStream.secondIp(),
unfinishedStream.secondPort()
);
if (serviceOptional.isEmpty()) {
log.warn("Failed to save the stream: service at port {} or {} does not exist",
unfinishedStream.getFirstPort(), unfinishedStream.getSecondPort());
unfinishedStream.firstPort(), unfinishedStream.secondPort());
return false;
}
CtfService service = serviceOptional.get();
@@ -95,6 +101,9 @@ public class StreamService {
countingService.countStream(service.getPort(), packets.size());
int packetsSize = packets.stream().mapToInt(p -> p.getContent().length).sum();
int packetsCount = packets.size();
List<Packet> optimizedPackets = new StreamOptimizer(keysHolder, service, packets).optimizeStream();
if (isStreamIgnored(optimizedPackets, service)) {
@@ -107,7 +116,7 @@ public class StreamService {
.findFirst();
final Stream stream = new Stream();
stream.setProtocol(unfinishedStream.getProtocol());
stream.setProtocol(unfinishedStream.protocol());
stream.setTtl(firstIncoming.map(Packet::getTtl).orElse(0));
stream.setStartTimestamp(packets.get(0).getTimestamp());
stream.setEndTimestamp(packets.get(packets.size() - 1).getTimestamp());
@@ -116,6 +125,9 @@ public class StreamService {
String userAgentHash = getUserAgentHash(optimizedPackets);
stream.setUserAgentHash(userAgentHash);
stream.setSizeBytes(packetsSize);
stream.setPacketsCount(packetsCount);
Set<Pattern> foundPatterns = matchPatterns(optimizedPackets, service);
stream.setFoundPatterns(foundPatterns);
stream.setPackets(optimizedPackets);
@@ -190,7 +202,7 @@ public class StreamService {
foundPatterns.addAll(matches.stream()
.map(FoundPattern::getPatternId)
.map(patternService::find)
.collect(Collectors.toList()));
.toList());
}
return foundPatterns;
@@ -244,9 +256,12 @@ public class StreamService {
return saved;
}
public List<Packet> getPackets(long streamId, @Nullable Long startingFrom, int pageSize) {
// long safeStartingFrom = startingFrom != null ? startingFrom : 0;
return repository.getPackets(streamId, startingFrom, Pageable.ofSize(pageSize));
@Transactional
public List<PacketDto> getPackets(long streamId, @Nullable Long startingFrom, int pageSize) {
return repository.getPackets(streamId, startingFrom, Pageable.ofSize(pageSize))
.stream()
.map(this::packetToDto)
.toList();
}
/**
@@ -262,7 +277,8 @@ public class StreamService {
repository.setFavorite(id, favorite);
}
public List<Stream> findAll(StreamPagination pagination, Optional<Integer> service, boolean onlyFavorites) {
@Transactional
public List<StreamDto> findAll(StreamPagination pagination, Optional<Integer> service, boolean onlyFavorites) {
PageRequest page = PageRequest.of(0, pagination.getPageSize(), Sort.Direction.DESC, "id");
Specification<Stream> spec = Specification.where(null);
@@ -283,7 +299,11 @@ public class StreamService {
spec = spec.and(streamPatternsContains(pagination.getPattern()));
}
return repository.findAll(spec, page).getContent();
return repository.findAll(spec, page)
.getContent()
.stream()
.map(this::streamToDto)
.toList();
}
public List<Stream> findAllBetweenTimestamps(long start, long end) {

View File

@@ -1,180 +0,0 @@
package ru.serega6531.packmate.service.optimization;
import com.google.common.primitives.Bytes;
import lombok.RequiredArgsConstructor;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.utils.BytesUtils;
import ru.serega6531.packmate.utils.PacketUtils;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
@Slf4j
@RequiredArgsConstructor
public class HttpChunksProcessor {
private static final String CHUNKED_HTTP_HEADER = "transfer-encoding: chunked\r\n";
private final List<Packet> packets;
private int position;
private boolean chunkStarted = false;
private final List<Packet> chunkPackets = new ArrayList<>();
public void processChunkedEncoding() {
int start = -1;
for (position = 0; position < packets.size(); position++) {
Packet packet = packets.get(position);
if (!packet.isIncoming()) {
String content = packet.getContentString();
boolean http = content.startsWith("HTTP/");
int contentPos = content.indexOf("\r\n\r\n");
if (http && contentPos != -1) { // start of the body
String headers = content.substring(0, contentPos + 2); // include the first \r\n
boolean chunked = headers.toLowerCase().contains(CHUNKED_HTTP_HEADER);
if (chunked) {
chunkStarted = true;
start = position;
chunkPackets.add(packet);
checkCompleteChunk(chunkPackets, start);
} else {
chunkStarted = false;
chunkPackets.clear();
}
} else if (chunkStarted) {
chunkPackets.add(packet);
checkCompleteChunk(chunkPackets, start);
}
}
}
}
private void checkCompleteChunk(List<Packet> packets, int start) {
boolean end = Arrays.equals(packets.get(packets.size() - 1).getContent(), "0\r\n\r\n".getBytes()) ||
BytesUtils.endsWith(packets.get(packets.size() - 1).getContent(), "\r\n0\r\n\r\n".getBytes());
if (end) {
processChunk(packets, start);
}
}
@SneakyThrows
private void processChunk(List<Packet> packets, int start) {
//noinspection OptionalGetWithoutIsPresent
final byte[] content = PacketUtils.mergePackets(packets).get();
ByteArrayOutputStream output = new ByteArrayOutputStream(content.length);
final int contentStart = Bytes.indexOf(content, "\r\n\r\n".getBytes()) + 4;
output.write(content, 0, contentStart);
ByteBuffer buf = ByteBuffer.wrap(Arrays.copyOfRange(content, contentStart, content.length));
while (true) {
final int chunkSize = readChunkSize(buf);
switch (chunkSize) {
case -1 -> {
log.warn("Failed to merge chunks, next chunk size not found");
resetChunk();
return;
}
case 0 -> {
buildWholePacket(packets, start, output);
return;
}
default -> {
if (!readChunk(buf, chunkSize, output)) return;
if (!readTrailer(buf)) return;
}
}
}
}
private boolean readChunk(ByteBuffer buf, int chunkSize, ByteArrayOutputStream output) throws IOException {
if (chunkSize > buf.remaining()) {
log.warn("Failed to merge chunks, chunk size too big: {} + {} > {}",
buf.position(), chunkSize, buf.capacity());
resetChunk();
return false;
}
byte[] chunk = new byte[chunkSize];
buf.get(chunk);
output.write(chunk);
return true;
}
private boolean readTrailer(ByteBuffer buf) {
if (buf.remaining() < 2) {
log.warn("Failed to merge chunks, chunk doesn't end with \\r\\n");
resetChunk();
return false;
}
int c1 = buf.get();
int c2 = buf.get();
if (c1 != '\r' || c2 != '\n') {
log.warn("Failed to merge chunks, chunk trailer is not equal to \\r\\n");
resetChunk();
return false;
}
return true;
}
private void buildWholePacket(List<Packet> packets, int start, ByteArrayOutputStream output) {
Packet result = Packet.builder()
.incoming(false)
.timestamp(packets.get(0).getTimestamp())
.ungzipped(false)
.webSocketParsed(false)
.tlsDecrypted(packets.get(0).isTlsDecrypted())
.content(output.toByteArray())
.build();
this.packets.removeAll(packets);
this.packets.add(start, result);
resetChunk();
position = start + 1;
}
private void resetChunk() {
chunkStarted = false;
chunkPackets.clear();
}
private int readChunkSize(ByteBuffer buf) {
StringBuilder sb = new StringBuilder();
while (buf.remaining() > 2) {
byte b = buf.get();
if ((b >= '0' && b <= '9') || (b >= 'a' && b <= 'f')) {
sb.append((char) b);
} else if (b == '\r') {
if (buf.get() == '\n') {
return Integer.parseInt(sb.toString(), 16);
} else {
return -1; // \r is not followed by \n
}
} else {
return -1;
}
}
return -1;
}
}

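The hex chunk-size scanner at the heart of the processor above can be exercised on its own. This sketch mirrors `readChunkSize`, including the source's restriction to lowercase hex digits (uppercase digits, which the chunked-encoding grammar also allows, would return -1 here just as in the original):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Standalone sketch of the chunk-size line parser: reads hex digits up to
// CRLF and returns the decoded size, or -1 on malformed input.
public class ChunkSizeReader {
    static int readChunkSize(ByteBuffer buf) {
        StringBuilder sb = new StringBuilder();
        while (buf.remaining() > 2) {
            byte b = buf.get();
            if ((b >= '0' && b <= '9') || (b >= 'a' && b <= 'f')) {
                sb.append((char) b);
            } else if (b == '\r') {
                // a size line must end with \r\n
                return buf.get() == '\n' ? Integer.parseInt(sb.toString(), 16) : -1;
            } else {
                return -1; // unexpected byte in the size line
            }
        }
        return -1; // ran out of input before CRLF
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("1a\r\nbody...".getBytes(StandardCharsets.US_ASCII));
        System.out.println(readChunkSize(buf)); // 26 (0x1a)
    }
}
```

After a successful read, the buffer is positioned at the chunk payload, which is exactly what `readChunk` relies on.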
View File

@@ -1,121 +0,0 @@
package ru.serega6531.packmate.service.optimization;
import com.google.common.primitives.Bytes;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.ArrayUtils;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.utils.PacketUtils;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.ZipException;
@Slf4j
@RequiredArgsConstructor
public class HttpGzipProcessor {
private static final String GZIP_HTTP_HEADER = "content-encoding: gzip\r\n";
private static final byte[] GZIP_HEADER = {0x1f, (byte) 0x8b, 0x08};
private final List<Packet> packets;
boolean gzipStarted = false;
private int position;
/**
* Try to unpack GZIP from outgoing http packets. <br>
* A GZIP stream starts at an HTTP packet whose headers contain Content-Encoding: gzip
* (the HTTP header itself may be located in a different packet).<br>
* The stream ends when a new HTTP header is found,
* when the transfer direction changes, or when the whole stream ends
*/
public void unpackGzip() {
int gzipStartPacket = 0;
for (position = 0; position < packets.size(); position++) {
Packet packet = packets.get(position);
if (packet.isIncoming() && gzipStarted) { // the gzip stream has ended
extractGzip(gzipStartPacket, position - 1);
} else if (!packet.isIncoming()) {
String content = packet.getContentString();
int contentPos = content.indexOf("\r\n\r\n");
boolean http = content.startsWith("HTTP/");
if (http && gzipStarted) { // a new http packet started, finish the old gzip stream
extractGzip(gzipStartPacket, position - 1);
}
if (contentPos != -1) { // start of the body
String headers = content.substring(0, contentPos + 2); // include the first \r\n
boolean gzipped = headers.toLowerCase().contains(GZIP_HTTP_HEADER);
if (gzipped) {
gzipStarted = true;
gzipStartPacket = position;
}
}
}
}
if (gzipStarted) { // the stream ended with a gzip packet
extractGzip(gzipStartPacket, packets.size() - 1);
}
}
/**
* Try to decompress a slice of packets with a gzip body and insert the result in their place
*/
private void extractGzip(int gzipStartPacket, int gzipEndPacket) {
List<Packet> cut = packets.subList(gzipStartPacket, gzipEndPacket + 1);
Packet decompressed = decompressGzipPackets(cut);
if (decompressed != null) {
packets.removeAll(cut);
packets.add(gzipStartPacket, decompressed);
gzipStarted = false;
position = gzipStartPacket + 1; // advance the pointer to the block right after the merged one
}
}
private Packet decompressGzipPackets(List<Packet> cut) {
//noinspection OptionalGetWithoutIsPresent
final byte[] content = PacketUtils.mergePackets(cut).get();
final int gzipStart = Bytes.indexOf(content, GZIP_HEADER);
byte[] httpHeader = Arrays.copyOfRange(content, 0, gzipStart);
byte[] gzipBytes = Arrays.copyOfRange(content, gzipStart, content.length);
try {
final GZIPInputStream gzipStream = new GZIPInputStream(new ByteArrayInputStream(gzipBytes));
ByteArrayOutputStream out = new ByteArrayOutputStream();
IOUtils.copy(gzipStream, out);
byte[] newContent = ArrayUtils.addAll(httpHeader, out.toByteArray());
log.debug("GZIP decompressed: {} -> {} bytes", gzipBytes.length, out.size());
return Packet.builder()
.incoming(false)
.timestamp(cut.get(0).getTimestamp())
.ungzipped(true)
.webSocketParsed(false)
.tlsDecrypted(cut.get(0).isTlsDecrypted())
.content(newContent)
.build();
} catch (ZipException e) {
log.warn("Failed to decompress gzip, leaving as it is: {}", e.getMessage());
} catch (IOException e) {
log.error("Failed to decompress gzip", e);
}
return null;
}
}

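The decompression core of the processor above is plain `GZIPInputStream` work. A self-contained round-trip sketch using only the JDK (the in-project version copies via `IOUtils.copy`; `InputStream.transferTo` is the stdlib equivalent):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trips a payload through GZIP the same way the processor inflates a body.
public class GzipRoundTrip {
    static byte[] gunzip(byte[] gzipBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipBytes))) {
            in.transferTo(out); // same copy step IOUtils.copy performs in the processor
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "flag{example}".getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(original); // closing the stream finishes the gzip trailer
        }
        byte[] restored = gunzip(compressed.toByteArray());
        System.out.println(new String(restored, StandardCharsets.UTF_8)); // flag{example}
    }
}
```

The processor's extra work around this core is locating the `0x1f 0x8b 0x08` magic bytes so that the uncompressed HTTP headers preceding the body are preserved verbatim.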
View File

@@ -0,0 +1,65 @@
package ru.serega6531.packmate.service.optimization;
import lombok.extern.slf4j.Slf4j;
import rawhttp.core.HttpMessage;
import rawhttp.core.RawHttp;
import rawhttp.core.RawHttpOptions;
import rawhttp.core.body.BodyReader;
import rawhttp.core.errors.InvalidHttpHeader;
import rawhttp.core.errors.InvalidHttpRequest;
import rawhttp.core.errors.InvalidHttpResponse;
import rawhttp.core.errors.InvalidMessageFrame;
import rawhttp.core.errors.UnknownEncodingException;
import ru.serega6531.packmate.model.Packet;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.Optional;
@Slf4j
public class HttpProcessor {
private static final RawHttp rawHttp = new RawHttp(RawHttpOptions.strict());
public void process(List<Packet> packets) {
packets.stream()
.filter(p -> !p.isWebSocketParsed())
.forEach(this::processPacket);
}
private void processPacket(Packet packet) {
try {
ByteArrayInputStream contentStream = new ByteArrayInputStream(packet.getContent());
HttpMessage message;
if (packet.isIncoming()) {
message = rawHttp.parseRequest(contentStream).eagerly();
} else {
message = rawHttp.parseResponse(contentStream).eagerly();
}
packet.setContent(getDecodedMessage(message));
packet.setHasHttpBody(message.getBody().isPresent());
} catch (IOException | InvalidHttpRequest | InvalidHttpResponse | InvalidHttpHeader | InvalidMessageFrame |
UnknownEncodingException e) {
log.warn("Could not parse http packet", e);
}
}
private byte[] getDecodedMessage(HttpMessage message) throws IOException {
ByteArrayOutputStream os = new ByteArrayOutputStream(256);
message.getStartLine().writeTo(os);
message.getHeaders().writeTo(os);
Optional<? extends BodyReader> body = message.getBody();
if (body.isPresent()) {
body.get().writeDecodedTo(os, 256);
}
return os.toByteArray();
}
}

View File

@@ -1,6 +1,5 @@
package ru.serega6531.packmate.service.optimization;
import lombok.AllArgsConstructor;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import ru.serega6531.packmate.model.Packet;
@@ -9,17 +8,14 @@ import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.List;
@AllArgsConstructor
@Slf4j
public class HttpUrldecodeProcessor {
private final List<Packet> packets;
/**
* Urldecode starting from an http packet until the direction changes or the stream ends
*/
@SneakyThrows
public void urldecodeRequests() {
public void urldecodeRequests(List<Packet> packets) {
boolean httpStarted = false;
for (Packet packet : packets) {

View File

@@ -1,30 +1,25 @@
package ru.serega6531.packmate.service.optimization;
import lombok.AllArgsConstructor;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.utils.PacketUtils;
import java.util.List;
@AllArgsConstructor
public class PacketsMerger {
private final List<Packet> packets;
/**
* Merge adjacent packets going in the same direction into one.
* Runs after the other optimizations so that packet boundaries are determined correctly.
* Merge adjacent packets going in the same direction into one. Does not merge WS and non-WS packets.
*/
public void mergeAdjacentPackets() {
public void mergeAdjacentPackets(List<Packet> packets) {
int start = 0;
int packetsInRow = 0;
boolean incoming = true;
Packet previous = null;
for (int i = 0; i < packets.size(); i++) {
Packet packet = packets.get(i);
if (packet.isIncoming() != incoming) {
if (previous == null || !shouldBeInSameBatch(packet, previous)) {
if (packetsInRow > 1) {
compress(start, i);
compress(packets, start, i);
i = start + 1; // advance the pointer to the block right after the merged one
}
@@ -34,36 +29,40 @@ public class PacketsMerger {
packetsInRow++;
}
incoming = packet.isIncoming();
previous = packet;
}
if (packetsInRow > 1) {
compress(start, packets.size());
compress(packets, start, packets.size());
}
}
/**
* Merge the slice from start to end into a single packet
*/
private void compress(int start, int end) {
private void compress(List<Packet> packets, int start, int end) {
final List<Packet> cut = packets.subList(start, end);
final long timestamp = cut.get(0).getTimestamp();
final boolean ungzipped = cut.stream().anyMatch(Packet::isUngzipped);
final boolean httpProcessed = cut.stream().anyMatch(Packet::isHttpProcessed);
final boolean webSocketParsed = cut.stream().anyMatch(Packet::isWebSocketParsed);
final boolean tlsDecrypted = cut.get(0).isTlsDecrypted();
final boolean incoming = cut.get(0).isIncoming();
//noinspection OptionalGetWithoutIsPresent
final byte[] content = PacketUtils.mergePackets(cut).get();
final byte[] content = PacketUtils.mergePackets(cut);
packets.removeAll(cut);
packets.add(start, Packet.builder()
.incoming(incoming)
.timestamp(timestamp)
.ungzipped(ungzipped)
.httpProcessed(httpProcessed)
.webSocketParsed(webSocketParsed)
.tlsDecrypted(tlsDecrypted)
.content(content)
.build());
}
private boolean shouldBeInSameBatch(Packet p1, Packet p2) {
return p1.isIncoming() == p2.isIncoming() &&
p1.isWebSocketParsed() == p2.isWebSocketParsed();
}
}
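The merger above walks the packet list and collapses each run of same-direction packets into one. A standalone sketch of that batching idea, with a hypothetical `Item` record standing in for `Packet` (not the project's actual class):

```java
import java.util.ArrayList;
import java.util.List;

public class MergeSketch {
    // Hypothetical stand-in for Packet: a direction flag plus payload bytes.
    record Item(boolean incoming, byte[] content) {}

    // Collapse adjacent items with the same direction into a single item.
    static List<Item> merge(List<Item> items) {
        List<Item> out = new ArrayList<>();
        for (Item item : items) {
            Item last = out.isEmpty() ? null : out.get(out.size() - 1);
            if (last != null && last.incoming() == item.incoming()) {
                // Same direction as the previous run: append the payload.
                byte[] joined = new byte[last.content().length + item.content().length];
                System.arraycopy(last.content(), 0, joined, 0, last.content().length);
                System.arraycopy(item.content(), 0, joined, last.content().length, item.content().length);
                out.set(out.size() - 1, new Item(last.incoming(), joined));
            } else {
                // Direction changed: start a new run.
                out.add(item);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Item> merged = merge(List.of(
                new Item(true, new byte[]{1}),
                new Item(true, new byte[]{2}),
                new Item(false, new byte[]{3})));
        System.out.println(merged.size()); // two runs: incoming {1,2}, outgoing {3}
    }
}
```

The real `PacketsMerger` additionally refuses to merge WS and non-WS packets via `shouldBeInSameBatch`, and carries flags like `ungzipped` forward with `anyMatch`.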


@@ -1,7 +1,6 @@
package ru.serega6531.packmate.service.optimization;
import lombok.AllArgsConstructor;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import ru.serega6531.packmate.model.CtfService;
import ru.serega6531.packmate.model.Packet;
@@ -16,6 +15,11 @@ public class StreamOptimizer {
private final CtfService service;
private List<Packet> packets;
private final PacketsMerger merger = new PacketsMerger();
private final HttpUrldecodeProcessor urldecodeProcessor = new HttpUrldecodeProcessor();
private final HttpProcessor httpProcessor = new HttpProcessor();
/**
* Call to run the optimizations on the given packet list.
*/
@@ -29,51 +33,42 @@ public class StreamOptimizer {
}
}
if (service.isProcessChunkedEncoding()) {
try {
processChunkedEncoding();
} catch (Exception e) {
log.warn("Error optimizing stream (chunks)", e);
return packets;
}
}
if (service.isUngzipHttp()) {
try {
unpackGzip();
} catch (Exception e) {
log.warn("Error optimizing stream (gzip)", e);
return packets;
}
}
if (service.isParseWebSockets()) {
try {
parseWebSockets();
} catch (Exception e) {
log.warn("Error optimizing stream (websocketss)", e);
log.warn("Error optimizing stream (websockets)", e);
return packets;
}
}
if (service.isUrldecodeHttpRequests()) {
try {
urldecodeRequests();
urldecodeProcessor.urldecodeRequests(packets);
} catch (Exception e) {
log.warn("Error optimizing stream (urldecode)", e);
return packets;
}
}
if (service.isMergeAdjacentPackets()) {
if (service.isMergeAdjacentPackets() || service.isHttp()) {
try {
mergeAdjacentPackets();
merger.mergeAdjacentPackets(packets);
} catch (Exception e) {
log.warn("Error optimizing stream (adjacent)", e);
return packets;
}
}
if (service.isHttp()) {
try {
httpProcessor.process(packets);
} catch (Exception e) {
log.warn("Error optimizing stream (http)", e);
return packets;
}
}
return packets;
}
@@ -86,44 +81,6 @@ public class StreamOptimizer {
}
}
/**
* Merge adjacent packets going in the same direction into one.
* Runs after the other optimizations so that packet boundaries are determined correctly.
*/
private void mergeAdjacentPackets() {
final PacketsMerger merger = new PacketsMerger(packets);
merger.mergeAdjacentPackets();
}
/**
* URL-decode starting from an HTTP packet until the side changes or the stream ends
*/
@SneakyThrows
private void urldecodeRequests() {
final HttpUrldecodeProcessor processor = new HttpUrldecodeProcessor(packets);
processor.urldecodeRequests();
}
/**
* <a href="https://ru.wikipedia.org/wiki/Chunked_transfer_encoding">Chunked transfer encoding</a>
*/
private void processChunkedEncoding() {
HttpChunksProcessor processor = new HttpChunksProcessor(packets);
processor.processChunkedEncoding();
}
/**
* Try to unpack GZIP from outgoing HTTP packets. <br>
* The GZIP stream starts at a discovered HTTP packet with the Content-Encoding: gzip header
* (the HTTP header itself may be in a different packet)<br>
* The stream ends when a new HTTP header is encountered,
* when the transfer side changes, or when the whole stream ends
*/
private void unpackGzip() {
final HttpGzipProcessor processor = new HttpGzipProcessor(packets);
processor.unpackGzip();
}
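Once the gzip body's start is located, the inflation itself can be done with the JDK's `java.util.zip`. A minimal round-trip sketch (the real `HttpGzipProcessor` also has to find the body boundary after the `Content-Encoding: gzip` header, which this omits):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    // Inflate a complete gzip byte stream into the original bytes.
    static byte[] ungzip(byte[] compressed) throws IOException {
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        // Compress a sample body, then inflate it back.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
            out.write("aaabbb".getBytes());
        }
        System.out.println(new String(ungzip(bos.toByteArray()))); // aaabbb
    }
}
```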
private void parseWebSockets() {
if (!packets.get(0).getContentString().contains("HTTP/")) {
return;


@@ -107,7 +107,7 @@ public class TlsDecryptor {
int blockCipherSize = Integer.parseInt(blockCipherParts[1]);
String blockCipherMode = blockCipherParts[2];
if (!blockCipherAlgo.equals("AES")) { //TODO support ciphers other than AES256
if (!blockCipherAlgo.equals("AES")) {
return;
}
@@ -182,15 +182,12 @@ public class TlsDecryptor {
decoded = clearDecodedData(decoded);
result.add(Packet.builder()
result.add(
packet.toBuilder()
.content(decoded)
.incoming(packet.isIncoming())
.timestamp(packet.getTimestamp())
.ungzipped(false)
.webSocketParsed(false)
.tlsDecrypted(true)
.ttl(packet.getTtl())
.build());
.build()
);
}
}
}
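The `Packet.builder()` → `packet.toBuilder()` change copies every field of the source packet and overrides only the ones that differ, instead of re-listing each field by hand. In the project this comes from Lombok's `@Builder(toBuilder = true)`; a hand-written equivalent, sketched with a hypothetical `Msg` record so it runs without Lombok:

```java
public class ToBuilderSketch {
    // Simplified stand-in for Packet; Lombok's @Builder(toBuilder = true)
    // generates this boilerplate on the real class.
    record Msg(byte[] content, boolean tlsDecrypted, int ttl) {
        Builder toBuilder() {
            // Seed the builder with the current field values.
            return new Builder(content, tlsDecrypted, ttl);
        }

        static final class Builder {
            private byte[] content;
            private boolean tlsDecrypted;
            private int ttl;

            Builder(byte[] content, boolean tlsDecrypted, int ttl) {
                this.content = content;
                this.tlsDecrypted = tlsDecrypted;
                this.ttl = ttl;
            }

            Builder content(byte[] c) { this.content = c; return this; }
            Builder tlsDecrypted(boolean t) { this.tlsDecrypted = t; return this; }
            Msg build() { return new Msg(content, tlsDecrypted, ttl); }
        }
    }

    public static void main(String[] args) {
        Msg original = new Msg(new byte[]{1, 2}, false, 64);
        // Copy all fields, override only the ones that change:
        Msg decrypted = original.toBuilder()
                .content(new byte[]{9})
                .tlsDecrypted(true)
                .build();
        System.out.println(decrypted.ttl()); // ttl carried over unchanged
    }
}
```

This is why the diff can delete the explicit `.incoming(...)`, `.timestamp(...)`, and `.ttl(...)` lines: they are copied implicitly.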


@@ -120,14 +120,9 @@ public class WebSocketsParser {
}
private Packet mimicPacket(Packet packet, byte[] content, boolean ws) {
return Packet.builder()
return packet.toBuilder()
.content(content)
.incoming(packet.isIncoming())
.timestamp(packet.getTimestamp())
.ttl(packet.getTtl())
.ungzipped(packet.isUngzipped())
.webSocketParsed(ws)
.tlsDecrypted(packet.isTlsDecrypted())
.build();
}
@@ -138,8 +133,7 @@ public class WebSocketsParser {
for (List<Packet> side : sides) {
final Packet lastPacket = side.get(0);
//noinspection OptionalGetWithoutIsPresent
final byte[] wsContent = PacketUtils.mergePackets(side).get();
final byte[] wsContent = PacketUtils.mergePackets(side);
final ByteBuffer buffer = ByteBuffer.wrap(wsContent);
List<Framedata> frames;
@@ -153,14 +147,10 @@ public class WebSocketsParser {
for (Framedata frame : frames) {
if (frame instanceof DataFrame) {
parsedPackets.add(Packet.builder()
parsedPackets.add(
lastPacket.toBuilder()
.content(frame.getPayloadData().array())
.incoming(lastPacket.isIncoming())
.timestamp(lastPacket.getTimestamp())
.ttl(lastPacket.getTtl())
.ungzipped(lastPacket.isUngzipped())
.webSocketParsed(true)
.tlsDecrypted(lastPacket.isTlsDecrypted())
.build()
);
}
@@ -179,13 +169,10 @@ public class WebSocketsParser {
}
private String getHandshake(final List<Packet> packets) {
final String handshake = PacketUtils.mergePackets(packets)
.map(String::new)
.orElse(null);
final String handshake = new String(PacketUtils.mergePackets(packets));
if (handshake == null ||
!handshake.toLowerCase().contains(WEBSOCKET_CONNECTION_HEADER) ||
!handshake.toLowerCase().contains(WEBSOCKET_UPGRADE_HEADER)) {
if (!handshake.toLowerCase().contains(WEBSOCKET_CONNECTION_HEADER)
|| !handshake.toLowerCase().contains(WEBSOCKET_UPGRADE_HEADER)) {
return null;
}


@@ -1,31 +1,30 @@
package ru.serega6531.packmate.tasks;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.service.StreamService;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
@Component
@Slf4j
@ConditionalOnProperty(name = "old-streams-cleanup-enabled", havingValue = "true")
@ConditionalOnExpression("${packmate.cleanup.enabled:false} && '${packmate.capture-mode}' == 'LIVE'")
public class OldStreamsCleanupTask {
private final StreamService service;
private final int oldStreamsThreshold;
public OldStreamsCleanupTask(StreamService service, @Value("${old-streams-threshold}") int oldStreamsThreshold) {
public OldStreamsCleanupTask(StreamService service, PackmateProperties properties) {
this.service = service;
this.oldStreamsThreshold = oldStreamsThreshold;
this.oldStreamsThreshold = properties.cleanup().threshold();
}
@Scheduled(fixedDelayString = "PT${cleanup-interval}M", initialDelayString = "PT1M")
@Scheduled(fixedDelayString = "PT${packmate.cleanup.interval}M", initialDelayString = "PT1M")
public void cleanup() {
ZonedDateTime before = ZonedDateTime.now().minus(oldStreamsThreshold, ChronoUnit.MINUTES);
ZonedDateTime before = ZonedDateTime.now().minusMinutes(oldStreamsThreshold);
log.info("Cleaning up old non-favorite streams (before {})", before);
long deleted = service.cleanupOldStreams(before);
log.info("Deleted {} rows", deleted);


@@ -1,10 +1,10 @@
package ru.serega6531.packmate.tasks;
import org.pcap4j.core.PcapNativeException;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.model.enums.CaptureMode;
import ru.serega6531.packmate.service.PcapService;
import ru.serega6531.packmate.service.ServicesService;
@@ -12,29 +12,23 @@ import ru.serega6531.packmate.service.ServicesService;
@Component
public class StartupListener {
@Value("${enable-capture}")
private boolean enableCapture;
@Value("${capture-mode}")
private CaptureMode captureMode;
private final PackmateProperties packmateProperties;
private final PcapService pcapService;
private final ServicesService servicesService;
public StartupListener(PcapService pcapService, ServicesService servicesService) {
public StartupListener(PcapService pcapService, ServicesService servicesService, PackmateProperties packmateProperties) {
this.pcapService = pcapService;
this.servicesService = servicesService;
this.packmateProperties = packmateProperties;
}
@EventListener(ApplicationReadyEvent.class)
public void afterStartup() throws PcapNativeException {
if (enableCapture) {
servicesService.updateFilter();
if (captureMode == CaptureMode.LIVE) {
if (packmateProperties.captureMode() == CaptureMode.LIVE) {
pcapService.start();
}
}
}
}


@@ -2,10 +2,10 @@ package ru.serega6531.packmate.tasks;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import ru.serega6531.packmate.properties.PackmateProperties;
import ru.serega6531.packmate.model.enums.Protocol;
import ru.serega6531.packmate.pcap.PcapWorker;
@@ -13,7 +13,7 @@ import java.util.concurrent.TimeUnit;
@Component
@Slf4j
@ConditionalOnProperty(name = "capture-mode", havingValue = "LIVE")
@ConditionalOnProperty(name = "packmate.capture-mode", havingValue = "LIVE")
public class TimeoutStreamsSaver {
private final PcapWorker pcapWorker;
@@ -22,14 +22,13 @@ public class TimeoutStreamsSaver {
@Autowired
public TimeoutStreamsSaver(PcapWorker pcapWorker,
@Value("${udp-stream-timeout}") int udpStreamTimeout,
@Value("${tcp-stream-timeout}") int tcpStreamTimeout) {
PackmateProperties properties) {
this.pcapWorker = pcapWorker;
this.udpStreamTimeoutMillis = TimeUnit.SECONDS.toMillis(udpStreamTimeout);
this.tcpStreamTimeoutMillis = TimeUnit.SECONDS.toMillis(tcpStreamTimeout);
this.udpStreamTimeoutMillis = TimeUnit.SECONDS.toMillis(properties.timeout().udpStreamTimeout());
this.tcpStreamTimeoutMillis = TimeUnit.SECONDS.toMillis(properties.timeout().tcpStreamTimeout());
}
@Scheduled(fixedRateString = "PT${timeout-stream-check-interval}S", initialDelayString = "PT${timeout-stream-check-interval}S")
@Scheduled(fixedRateString = "PT${packmate.timeout.check-interval}S", initialDelayString = "PT${packmate.timeout.check-interval}S")
public void saveStreams() {
int streamsClosed = pcapWorker.closeTimeoutStreams(Protocol.UDP, udpStreamTimeoutMillis);
if (streamsClosed > 0) {


@@ -1,20 +1,28 @@
package ru.serega6531.packmate.utils;
import lombok.experimental.UtilityClass;
import org.apache.commons.lang3.ArrayUtils;
import ru.serega6531.packmate.model.Packet;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
@UtilityClass
public class PacketUtils {
public Optional<byte[]> mergePackets(List<Packet> cut) {
return cut.stream()
public byte[] mergePackets(List<Packet> cut) {
int size = cut.stream()
.map(Packet::getContent)
.reduce(ArrayUtils::addAll);
.mapToInt(c -> c.length)
.sum();
ByteArrayOutputStream os = new ByteArrayOutputStream(size);
cut.stream()
.map(Packet::getContent)
.forEach(os::writeBytes);
return os.toByteArray();
}
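The rewrite above replaces `reduce(ArrayUtils::addAll)`, which allocates a fresh array on every reduction step (quadratic copying), with a single pre-sized `ByteArrayOutputStream`. The same approach as a standalone sketch:

```java
import java.io.ByteArrayOutputStream;
import java.util.List;

public class ConcatSketch {
    // Concatenate byte chunks into one pre-sized buffer: each chunk is
    // copied exactly once, instead of being re-copied on every reduce step.
    static byte[] concat(List<byte[]> chunks) {
        int size = chunks.stream().mapToInt(c -> c.length).sum();
        ByteArrayOutputStream os = new ByteArrayOutputStream(size);
        chunks.forEach(os::writeBytes);
        return os.toByteArray();
    }

    public static void main(String[] args) {
        byte[] merged = concat(List.of("ab".getBytes(), "cd".getBytes()));
        System.out.println(new String(merged)); // abcd
    }
}
```

Dropping the `Optional` return also removes the `OptionalGetWithoutIsPresent` suppressions at the call sites: an empty list now simply yields an empty array.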
public List<List<Packet>> sliceToSides(List<Packet> packets) {


@@ -0,0 +1,3 @@
org.springframework.boot.diagnostics.FailureAnalyzer=\
ru.serega6531.packmate.exception.analyzer.PcapFileNotFoundFailureAnalyzer,\
ru.serega6531.packmate.exception.analyzer.PcapInterfaceNotFoundFailureAnalyzer


@@ -1,6 +1,6 @@
spring:
datasource:
url: "jdbc:postgresql://localhost/packmate"
url: "jdbc:postgresql://localhost:5432/packmate"
username: "packmate"
password: "123456"
driver-class-name: org.postgresql.Driver
@@ -12,22 +12,27 @@ spring:
jdbc:
batch_size: 20
order_inserts: true
temp:
use_jdbc_metadata_defaults: false
database-platform: org.hibernate.dialect.PostgreSQLDialect
server:
compression:
enabled: true
min-response-size: 1KB
enable-capture: true
capture-mode: LIVE # LIVE, FILE, VIEW
interface-name: enp0s31f6
pcap-file: file.pcap
local-ip: "192.168.0.125"
account-login: BinaryBears
account-password: 123456
udp-stream-timeout: 20 # seconds
tcp-stream-timeout: 40 # seconds
timeout-stream-check-interval: 10 # seconds
old-streams-cleanup-enabled: true
old-streams-threshold: 240 # minutes
cleanup-interval: 5 # minutes
ignore-empty-packets: true
packmate:
capture-mode: LIVE # LIVE, FILE, VIEW
interface-name: enp0s31f6
pcap-file: file.pcap
local-ip: "192.168.0.125"
web:
account-login: BinaryBears
account-password: 123456
timeout:
udp-stream-timeout: 20 # seconds
tcp-stream-timeout: 40 # seconds
check-interval: 10 # seconds
cleanup:
enabled: true
threshold: 240 # minutes
interval: 5 # minutes
ignore-empty-packets: true
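The flat properties are regrouped under a `packmate:` prefix so they can bind to an immutable `PackmateProperties` object with nested `timeout` and `cleanup` groups, matching accessor chains like `properties.cleanup().threshold()` in the tasks above. A hypothetical sketch of the record shape this implies; in the real app the top-level record would carry Spring's `@ConfigurationProperties("packmate")`, omitted here so the example runs standalone:

```java
public class PropertiesSketch {
    // Hypothetical records mirroring the nested YAML keys above.
    record Timeout(int udpStreamTimeout, int tcpStreamTimeout, int checkInterval) {}
    record Cleanup(boolean enabled, int threshold, int interval) {}
    record PackmateProps(String captureMode, Timeout timeout, Cleanup cleanup) {}

    public static void main(String[] args) {
        // Values taken from the sample config above.
        PackmateProps props = new PackmateProps(
                "LIVE",
                new Timeout(20, 40, 10),
                new Cleanup(true, 240, 5));
        // Same accessor chain as OldStreamsCleanupTask uses:
        System.out.println(props.cleanup().threshold()); // 240
    }
}
```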


@@ -3,7 +3,7 @@ package ru.serega6531.packmate;
import org.apache.commons.lang3.ArrayUtils;
import org.junit.jupiter.api.Test;
import ru.serega6531.packmate.model.Packet;
import ru.serega6531.packmate.service.optimization.HttpGzipProcessor;
import ru.serega6531.packmate.service.optimization.HttpProcessor;
import ru.serega6531.packmate.service.optimization.HttpUrldecodeProcessor;
import ru.serega6531.packmate.service.optimization.PacketsMerger;
@@ -26,18 +26,18 @@ class StreamOptimizerTest {
List<Packet> list = new ArrayList<>();
list.add(p);
new HttpGzipProcessor(list).unpackGzip();
new HttpProcessor().process(list);
final String processed = list.get(0).getContentString();
assertTrue(processed.contains("aaabbb"));
}
@Test
void testUrldecodeRequests() {
Packet p = createPacket("GET /?q=%D0%B0+%D0%B1 HTTP/1.1\r\n\r\n".getBytes(), true);
Packet p = createPacket("GET /?q=%D0%B0+%D0%B1 HTTP/1.1\r\nHost: localhost:8080\r\n\r\n".getBytes(), true);
List<Packet> list = new ArrayList<>();
list.add(p);
new HttpUrldecodeProcessor(list).urldecodeRequests();
new HttpUrldecodeProcessor().urldecodeRequests(list);
final String processed = list.get(0).getContentString();
assertTrue(processed.contains("а б"));
}
@@ -59,7 +59,7 @@ class StreamOptimizerTest {
list.add(p5);
list.add(p6);
new PacketsMerger(list).mergeAdjacentPackets();
new PacketsMerger().mergeAdjacentPackets(list);
assertEquals(4, list.size());
assertEquals(2, list.get(1).getContent().length);
@@ -67,6 +67,18 @@ class StreamOptimizerTest {
assertEquals(2, list.get(3).getContent().length);
}
@Test
void testChunkedTransferEncoding() {
String content = "HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n" +
"6\r\nChunk1\r\n6\r\nChunk2\r\n0\r\n\r\n";
List<Packet> packets = new ArrayList<>(List.of(createPacket(content.getBytes(), false)));
new HttpProcessor().process(packets);
assertEquals(1, packets.size());
assertTrue(packets.get(0).getContentString().contains("Chunk1Chunk2"));
}
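The new test feeds the processor a chunked body of the form `<hex size>\r\n<data>\r\n` terminated by a zero-size chunk. A minimal decoder for that wire format, sketched independently of the project's RawHTTP-based implementation:

```java
public class ChunkedSketch {
    // Decode an HTTP chunked body: each chunk is "<hex size>\r\n<data>\r\n",
    // and a zero-size chunk terminates the body. Trailers are not handled.
    static String decode(String body) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (true) {
            int lineEnd = body.indexOf("\r\n", pos);
            int size = Integer.parseInt(body.substring(pos, lineEnd), 16);
            if (size == 0) {
                break; // terminating zero-size chunk
            }
            out.append(body, lineEnd + 2, lineEnd + 2 + size);
            pos = lineEnd + 2 + size + 2; // skip chunk data and trailing CRLF
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("6\r\nChunk1\r\n6\r\nChunk2\r\n0\r\n\r\n")); // Chunk1Chunk2
    }
}
```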
private Packet createPacket(int content, boolean incoming) {
return createPacket(new byte[] {(byte) content}, incoming);
}