# FileCatalyst – Workload Automation
```bash
# Send 10 files in parallel
ls /data/to_send/*.dat | xargs -P 10 -I {} fta-cli --put {} --target /remote/
```

Check the file hash before transfer.
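The hash check can be sketched with Python's `hashlib` (a minimal example; comparing the digest on the receiving side after the transfer is left to the workflow):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest before invoking fta-cli, then verify the same
# digest against the file that arrives on the remote side.
```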
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime

default_args = {'retries': 3}

with DAG('fc_transfer_dag',
         start_date=datetime(2024, 1, 1),
         schedule='0 2 * * *',
         default_args=default_args) as dag:
    transfer = BashOperator(
        task_id='send_to_fc',
        bash_command='fta-cli --server fc.prod.com --put /daily/report.csv --target /archive/'
    )
```
```bash
# Retry up to 3 times
RETRIES=3
for i in $(seq 1 $RETRIES); do
  fta-cli --put critical_file.dat --target /incoming/ && break || sleep 10
done
```

| Tool | Integration Method |
|------|--------------------|
| Apache Airflow | Use BashOperator with fta-cli, or SimpleHttpOperator for the REST API |
| Jenkins | Execute shell script step calling fta-cli |
| Rundeck | Create a job step: "Command" → fta-cli ... |
| Control-M | FileCatalyst provides a Control-M plugin (File Transfer Hub) |
| Apache NiFi | Use the ExecuteProcess processor to call fta-cli |
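Most of the tools above ultimately shell out to fta-cli, so a small wrapper is reusable across Jenkins, Rundeck, and NiFi steps. This is a hypothetical helper, not part of FileCatalyst; the `server` and `timeout` defaults are illustrative, and the `binary` parameter exists only so the wrapper can be exercised without fta-cli installed:

```python
import subprocess

def fc_put(local_path, target_dir, server="fc.prod.com",
           binary="fta-cli", timeout=3600):
    """Upload one file via fta-cli; return (success, combined output)."""
    cmd = [binary, "--server", server,
           "--put", local_path, "--target", target_dir]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout)
    except subprocess.TimeoutExpired:
        return False, "transfer timed out"
    return result.returncode == 0, result.stdout + result.stderr
```

A Jenkins shell step or NiFi ExecuteProcess processor can call this via a short `python` invocation and branch on the returned success flag.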
```python
import requests

def get_queue_depth():
    resp = requests.get("http://fc-server:8080/api/transfers?status=PENDING")
    return len(resp.json())

# alert() is a placeholder for your monitoring hook (email, PagerDuty, etc.)
if get_queue_depth() > 50:
    alert("FileCatalyst backlog critical")
```
```python
import requests
import time

API_BASE = "http://fc-server:8080/api"
API_KEY = "your-api-key"

def run_transfer(local_path, remote_path):
    payload = {
        "source": local_path,
        "destination": remote_path,
        "server": "destination-host",
        "username": "transfer_user",
        "password": "secret"
    }
    # Submit the transfer job; the exact authentication header shape
    # may vary by FileCatalyst version.
    resp = requests.post(f"{API_BASE}/transfer", json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
    return resp.json()
```
```properties
hotfolder.watch.dir=/opt/fc/watch
hotfolder.target.server=192.168.1.100
hotfolder.target.port=11001
hotfolder.target.user=hotfold_user
hotfolder.target.password=encrypted_pass
hotfolder.target.directory=/uploads
hotfolder.post.delete=true   # Delete local after success
hotfolder.compress=true      # On-the-fly compression
```

Example use case: a monitoring system drops log files every hour, and FileCatalyst transfers them to a central archive.

## Method C: REST API – Best for Centralized Workload Orchestration

The FileCatalyst Server exposes a REST API (port 8080 by default) for managing transfers, users, and monitoring.
| Action | Endpoint | Method |
|--------|----------|--------|
| Trigger transfer | /api/transfer | POST |
| Get transfer status | /api/transfer/{id} | GET |
| List active transfers | /api/transfers | GET |
| Create user | /api/users | POST |
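A trigger-then-poll loop over these endpoints might look like the sketch below. The `status` field and the `COMPLETED`/`FAILED` values are assumptions about the response JSON (only `PENDING` appears in the queue-depth example above), and the `get` parameter is injectable purely to make the loop testable:

```python
import time
import requests

API_BASE = "http://fc-server:8080/api"

def wait_for_transfer(transfer_id, poll_interval=5, max_wait=600,
                      get=requests.get):
    """Poll GET /api/transfer/{id} until the job reaches a terminal state."""
    deadline = time.time() + max_wait
    while time.time() < deadline:
        resp = get(f"{API_BASE}/transfer/{transfer_id}")
        status = resp.json().get("status")
        if status in ("COMPLETED", "FAILED"):  # assumed terminal statuses
            return status
        time.sleep(poll_interval)
    return "TIMEOUT"
```

An orchestrator task can fail or retry based on the returned value, keeping the transfer fully observable from the scheduler's side.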