A simple and straightforward way to use AGILITY is to provision an existing cloud image. This option is available in both public and private cloud environments. Access to the VM image is provided by B-Yond.
Public Clouds:
- AWS: The AMI (Amazon Machine Image) ID will be shared with the target account.
- Azure: The Azure VM image will be shared with the target subscription/tenant.
- Google Cloud: The Google Cloud VM image will be shared with the target organization.
On-Premises Virtualization Platforms:
- OpenStack: Download the provided qcow2 image, configured specifically for OpenStack.
- VMware: Download the provided VMware disk image, configured specifically for VMware virtualization environments.
If you are using other Cloud providers or virtualization solutions, you may need to convert the qcow2 or VMware disk images to the format required by your platform. Consult the documentation of your specific provider or platform for instructions on image conversion.
Using the B-Yond provided images is recommended as they are pre-configured and optimized for running AGILITY.
Initial specifications, depending on the number of PCAP files to be processed daily:
| Number of files per busy hour | Average number of packets per file | CPU | Memory (GB) | Boot disk (GB) |
|---|---|---|---|---|
| 50 | 2.5K | 12 | 48 | 50 |
| 100 | 2.5K | 16 | 64 | 50 |
| 50 | 25K | 16 | 64 | 75 |
| 25 | 250K | 24 | 96 | 150 |
ℹ️ Assumptions: 100 files are processed per day; the retention period is 3 days.
Please note that the disk requirement applies to the boot disk when it is the sole storage, or to the external disk when one is used instead.
ℹ️ The values associated with AGILITY application or its monitoring stack can be customized according to specific requirements and file sizes.
(As an administrator) Create an image:
glance image-create --disk-format qcow2 --container-format bare --file ./Agility-X.YY.Z-AlmaLinux-X-GenericCloud-X.Y-YYYYMMDD.x86_64.qcow2 --min-disk 25 --min-ram 2048 --name Agility-X.Y.Z
(As an administrator) Create a member for the glance image:
glance member-create <image-id> <member-id>
(As an administrator) Accept the membership for the glance image:
glance member-update <image-id> <member-id> accepted
(As a user) Create a VM using the image (minimum flavor m1.medium: 2 vCPU / 4096 MB RAM / 40 GB disk):
openstack server create --flavor <your-flavor> --image <image-id> agility --nic net-id=<network-id> --security-group <your-security-group> --key-name <your-key>
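Once created, the instance state can be checked before proceeding (a sketch; the server name `agility` matches the create command above):

```shell
# Confirm the instance reached ACTIVE state and note its IP address
openstack server show agility -c status -c addresses
```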
Follow the procedures specified by your Cloud provider. These typically include providing a cloud-init script to run (in general an optional step).

ℹ️ The VM boot time might take between 5 and 10 minutes in total.
To import a virtual machine stored on a VMware Hosted product to an ESX/ESXi host, run:
vmkfstools -i virtual_machine.vmdk /vmfs/volumes/datastore/my_virtual_machine_folder/virtual_machine.vmdk
Guest OS: Other Linux (64-bit)

Log in as root with the password almalinux.

Rescan the disk so the new size is detected:

echo 1 > /sys/class/block/sda/device/rescan

Recreate partition 2 with fdisk so it spans the enlarged disk:

printf "d\n\nn\n\n\n\np\nw\n" | fdisk /dev/sda

Grow the XFS filesystem:

xfs_growfs /dev/sda2
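After the rescan and resize, the enlarged partition and filesystem can be verified with standard tools:

```shell
# Verify the block devices and the grown root filesystem
lsblk              # sda2 should now span the enlarged disk
df -h /            # the root filesystem size should reflect the XFS grow
```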
Configure SSH options, e.g. set authorized keys for the default cloud user almalinux or another user.
ℹ️ For ESXi 8.0, use Guest OS: Other Linux (64-bit) and enable the LSI Logic Parallel SCSI controller option.
SSH into the VM using the cloud user and the associated private key. Depending on the platform, the default cloud user is ec2-user or almalinux:

ssh -i <private_key> <cloud-user>@<vm_ip>
Verify that all components are up and running:
sudo su -
kubectl get pods -A
All Kubernetes pods should be in Running and Ready status.
[!WARNING] When some pods are not running, e.g. Kafka, Zookeeper, etc., deleting them might fix the issue; however, a VM reboot is recommended instead.
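To narrow down which pods are unhealthy before deciding on a reboot, these read-only commands can help (a sketch; they assume kubectl access on the VM):

```shell
# List pods in any namespace that are not in the Running phase
kubectl get pods -A --field-selector=status.phase!=Running

# Show recent cluster events, newest last, for crash-loop or scheduling clues
kubectl get events -A --sort-by=.metadata.creationTimestamp | tail -n 20
```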
Access the user interface (UI):
Open your browser and enter the AGILITY VM IP or FQDN, e.g. https://10.0.0.1/cv/
Use the following credentials:
username: agility-admin@b-yond.com
password: agility-admin@b-yond.com
ℹ️ The default password must be changed after the first login. Later, it can be modified by following the Manage Agility Local Users section.
ℹ️ The DNS server is by default provided via DHCP. This section is relevant if you need to specify an additional DNS server or if the DHCP option is unavailable.
To configure nameservers, domain search suffixes, etc., use the NetworkManager tool:
Check the current DNS configuration
cat /etc/resolv.conf
An example output:
cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 169.254.169.254
Identify the network connection to configure:
sudo nmcli con show
This is an example of the output:
NAME UUID TYPE DEVICE
System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet eth0
cni0 4e4c9ecf-cc82-49eb-bda6-99317c953691 bridge cni0
flannel.1 021d7133-dad5-4a02-a035-b11009ac943a vxlan flannel.1
Add a new DNS server:
sudo nmcli con mod <connection-name> +ipv4.dns <dns-server-ip>
sudo nmcli con up <connection-name>
For example, to add Google’s public DNS server to the connection System eth0, the commands are:
sudo nmcli con mod "System eth0" +ipv4.dns 8.8.8.8
sudo nmcli con up "System eth0"
To verify the change, run again:
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 169.254.169.254
For reference purposes, to remove the DNS server specified by DHCP run the following commands:
sudo nmcli con mod <connection-name> ipv4.ignore-auto-dns yes
sudo nmcli con up <connection-name>
This will leave only the DNS servers configured manually.
Use the ipv4.dns-search option to change the domain search list if necessary. Ensure that the correct fully qualified domain name (FQDN) is set beforehand using the hostnamectl set-hostname command. Run the following commands, adjusting accordingly:
sudo nmcli con mod <connection-name> +ipv4.dns-search <domain>
sudo nmcli con up <connection-name>
For example, to add a domain name in the search list (here example.com), run:
$ sudo nmcli con mod "System eth0" +ipv4.dns-search example.com
$ sudo nmcli con up "System eth0"
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 8.8.8.8
nameserver 169.254.169.254
[!WARNING] There should be at least 1 (one) nameserver defined for AGILITY in the VM.
ℹ️ This section is relevant when non-default NTP servers are required or when access to external public NTP servers is restricted.
The AGILITY VM synchronizes its clock using the Chrony service, which is enabled by default and synchronizes with a pool of public NTP servers.
To synchronize the VM clock with a specific NTP server, follow these instructions:
Check the current configured servers:
chronyc sources
Example output:
$ chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 68.64.173.196 2 10 377 132 +6288us[+6288us] +/- 119ms
^- tick.srs1.ntfo.org 3 9 377 57 -693us[ -693us] +/- 148ms
^* ntp1.wiktel.com 1 10 377 738 +362us[ +214us] +/- 22ms
^+ 23.150.40.242 2 10 377 103 -1361us[-1361us] +/- 32ms
Add your server definition in the file /etc/chrony.conf
:
server <my-server-ip>
For example, using a public cloud NTP server:
echo "server 169.254.169.254" | sudo tee -a /etc/chrony.conf
Comment out the entry pool 2.almalinux.pool.ntp.org iburst
to enforce using only the specified NTP server:
sudo sed -i '/^pool 2\.almalinux\.pool\.ntp\.org iburst/s/^/#/' /etc/chrony.conf
Restart the Chrony service:
sudo systemctl restart chronyd
Check that the changes were applied (wait until the status changes from ^? to ^*; this might take several minutes):
chronyc sources
Example output:
$ chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.254 2 6 3 51 -491us[ -491us] +/- 23ms
Enable NTP and trigger a synchronization:
sudo timedatectl set-ntp true
sudo chronyc -a makestep
Verify the clock is synchronized:
timedatectl
The output should resemble:
$ timedatectl
Local time: Mon 2024-03-11 22:05:42 UTC
Universal time: Mon 2024-03-11 22:05:42 UTC
RTC time: Mon 2024-03-11 22:05:43
Time zone: UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
To confirm chrony tracking, run the command:
chronyc tracking
The output also shows the configured NTP server:
Reference ID : A9FEA9FE (169.254.169.254)
Stratum : 3
Ref time (UTC) : Mon Mar 11 22:06:03 2024
System time : 0.000000751 seconds slow of NTP time
Last offset : -0.000023889 seconds
RMS offset : 0.000017941 seconds
Frequency : 18.960 ppm slow
Residual freq : -0.001 ppm
Skew : 0.011 ppm
Root delay : 0.000524478 seconds
Root dispersion : 0.010530258 seconds
Update interval : 1026.3 seconds
Leap status : Normal
Ensure the Chrony service is available after reboot:
sudo systemctl enable chronyd
Available timezone definitions are stored in the /usr/share/zoneinfo directory. To set your system to the appropriate timezone, such as Europe/Paris, execute the following command:
sudo timedatectl set-timezone Europe/Paris
Additionally, you can confirm your current timezone by inspecting the /etc/localtime
file:
ls -l /etc/localtime
If an external disk must be attached, follow these steps. The exact procedure depends on the type of external disk used.
Access the VM using ssh
Stop the processes:
sudo su -
systemctl stop k3s
Place the persisted data into a different location:
mv /var/lib/rancher/k3s/storage /var/lib/rancher/k3s/storage-bkp
Create a directory on the VM to serve as the mount point for the NFS share:
sudo mkdir -p /var/lib/rancher/k3s/storage
Edit the /etc/fstab file as root using a text editor, such as nano or vim:
sudo nano /etc/fstab
Add an entry at the end of the /etc/fstab file to specify the NFS share and the mount point. The entry should follow this format:
<NFS_server_IP_or_hostname>:<remote_directory> <local_mount_point> nfs defaults 0 0
Replace the placeholders with the values for your environment.
For example, if the NFS server IP address is 192.168.1.100 and the remote directory you want to mount is /data, the entry would look like this:
192.168.1.100:/data /var/lib/rancher/k3s/storage nfs defaults 0 0
Save the changes and exit the text editor.
To mount all entries listed in /etc/fstab, you can use the mount -a
command.
Ensure that your VM has network connectivity to the NFS server and that you have the necessary permissions to access the NFS share.
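As an alternative to editing /etc/fstab interactively, the entry can be built and appended non-interactively. A sketch, using the example values from above (substitute your own):

```shell
# Build the fstab entry from example values; substitute your own.
NFS_SERVER=192.168.1.100
NFS_EXPORT=/data
MOUNT_POINT=/var/lib/rancher/k3s/storage

ENTRY="${NFS_SERVER}:${NFS_EXPORT} ${MOUNT_POINT} nfs defaults 0 0"
echo "${ENTRY}"

# As root on the VM, append it and mount everything listed in fstab:
#   echo "${ENTRY}" >> /etc/fstab
#   mount -a
```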
Your cloud provider gives you the ability to provision block storage and attach the disk to your VM. Follow the provider's recommended procedures; they may involve, for example, several iscsi command executions.
Once attached, format the disk (e.g., sdb):
export DEV_PATH=sdb
export MOUNT_PATH=/var/lib/rancher/k3s/storage
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/${DEV_PATH}
sudo mkdir -p ${MOUNT_PATH}
sudo mount -o discard,defaults /dev/${DEV_PATH} ${MOUNT_PATH}
sudo chmod 775 ${MOUNT_PATH}
Persist the changes:
sudo cp /etc/fstab /etc/fstab.backup
UUID=$(sudo blkid -s UUID -o value /dev/${DEV_PATH})
echo $UUID
echo "UUID=${UUID} ${MOUNT_PATH} ext4 _netdev,nofail 0 2" | sudo tee -a /etc/fstab
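Before rebooting, the new /etc/fstab entry can be sanity-checked on the VM (a sketch; findmnt is part of util-linux):

```shell
# Parse /etc/fstab and report syntax or consistency problems
findmnt --verify || echo "fstab reported problems above"

# Confirm the disk is mounted at the expected path
findmnt /var/lib/rancher/k3s/storage || echo "not mounted yet"
```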
Copy the data to the newly mounted external location:
sudo su -
cp -R /var/lib/rancher/k3s/storage-bkp/* /var/lib/rancher/k3s/storage/
Start the processes:
systemctl start k3s
Wait a few seconds and ensure that all services are in the Running state:
kubectl get pods -n agility
Verify that the system is functioning correctly by performing tasks in the UI.
Once you have confirmed everything is working as expected, you can delete the old data:
rm -fr /var/lib/rancher/k3s/storage-bkp
Please note that these steps assume you have the necessary permissions and understand the implications of deleting the old data. Exercise caution while performing these operations.