Network Automation

Cisco NSO Orchestration Framework

Multi-platform network orchestration using Cisco NSO

Built a production-grade orchestration framework leveraging Cisco NSO (Network Services Orchestrator) to manage multi-vendor network devices. The framework supports Cisco IOS-XE, IOS-XR, and NX-OS platforms, providing intent-based configuration and reconciliation capabilities.

Key Features

  • Multi-platform support (IOS-XE, IOS-XR, NX-OS)
  • Intent-based configuration model
  • Dry-run mode for safe testing
  • Configuration reconciliation engine
  • RESTCONF API integration
  • Jinja2 templating for flexibility
  • Rollback capabilities
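
A minimal sketch of how the dry-run feature could wrap NSO's RESTCONF interface. The host and service path are hypothetical; NSO exposes dry-run behavior through a query parameter on configuration edits, and this helper just assembles the request without sending it:

```python
# Sketch of building a dry-run service edit against NSO's RESTCONF API.
# NSO_BASE and the service path below are hypothetical placeholders.
import json

NSO_BASE = "https://nso.example.com/restconf/data"  # hypothetical host

def build_service_request(service_path: str, payload: dict, dry_run: bool = True):
    """Assemble URL, query params, headers, and body for an NSO service edit."""
    params = {"dry-run": "native"} if dry_run else {}  # preview device-native config
    headers = {
        "Content-Type": "application/yang-data+json",
        "Accept": "application/yang-data+json",
    }
    return f"{NSO_BASE}/{service_path}", params, headers, json.dumps(payload)

url, params, headers, body = build_service_request(
    "my-svc:loopback-service", {"my-svc:loopback-service": {"device": "ios-xe-1"}}
)
```

In a real deployment the same builder would feed a `requests.patch()` call, with `dry_run=False` for the actual commit.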

Technologies Used

Python Cisco NSO RESTCONF Jinja2 YANG Models

What I Learned

This project deepened my understanding of network automation beyond simple scripting. Working with NSO taught me about intent-based networking, service modeling, and the importance of reconciliation in production environments. The RESTCONF API integration showed me how modern network management can follow DevOps principles.

Nokia SROS Network Orchestration

Production-grade 9-layer orchestration framework for Nokia SROS routers

Production-grade Infrastructure as Code framework specifically for Nokia SROS routers with enterprise-level orchestration capabilities. The 9-layer architecture provides validation, templating, NETCONF communication, reconciliation, auditing, configuration management, error handling, change tracking, and compliance logging.

Key Features

  • 9-Layer Architecture Framework: Validation, templating, NETCONF communication, reconciliation engine, auditing, configuration management, error handling, change tracking, and compliance logging
  • Production-Safe Deployments: Confirmed commits with automatic rollback on failure, pre/post-deployment validation and verification
  • Comprehensive Audit & Compliance: Detailed audit logging, change tracking for troubleshooting, configuration version control, compliance reporting
  • NETCONF Protocol Integration: Native NETCONF communication, efficient configuration management, state reconciliation, configuration drift detection
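
The confirmed-commit flow behind the production-safe deployments can be sketched transport-agnostically. A real implementation would drive these RPCs through a NETCONF client such as ncclient against the SROS agent; the `FakeSession` stub here only records the RPC sequence so the rollback logic is testable offline:

```python
# Confirmed-commit deployment sketch: commit with a confirm timer, run
# post-checks, and either confirm or cancel. FakeSession is a test stub.

class FakeSession:
    def __init__(self):
        self.rpcs = []
    def commit(self, confirmed=False, timeout=None):
        self.rpcs.append(("commit", confirmed, timeout))
    def cancel_commit(self):
        self.rpcs.append(("cancel-commit", None, None))

def safe_deploy(session, post_check, confirm_timeout=120):
    """Commit with a confirm timer; make the change final only if checks pass."""
    session.commit(confirmed=True, timeout=confirm_timeout)
    if post_check():
        session.commit()      # confirming commit makes the change permanent
        return True
    session.cancel_commit()   # device reverts to the prior configuration
    return False

sess = FakeSession()
ok = safe_deploy(sess, post_check=lambda: False)  # failed verification path
```

If the confirming commit never arrives (operator cut off, script crash), the device rolls back on its own when the timer expires, which is what makes this pattern safe for remote production changes.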

Technologies Used

Python Nokia SROS NETCONF Jinja2 Infrastructure as Code Orchestration

What I Learned

Key takeaways: Nokia SROS-specific automation patterns, NETCONF protocol implementation, production-safe deployment strategies, reconciliation and drift detection, and enterprise network orchestration. This project taught me how to build production-grade automation that's reliable enough for critical infrastructure managing large-scale router deployments.

Note: This project represents professional work and is not publicly available. Architecture overview and technical discussion available during interviews.

VxLAN EVPN Spine-Leaf Fabric

Production-style data center fabric with Python IaC automation on Cisco Nexus 9000

Full VxLAN EVPN spine-leaf fabric deployed and verified end-to-end in Cisco's DevNet Nexus Dashboard sandbox using virtual Nexus 9000v switches running NX-OS 10.6. Covers the complete stack: OSPF underlay, BGP EVPN overlay with spine route reflectors, multi-tenant VRFs, anycast distributed gateway, ARP suppression, and PIM sparse-mode multicast for BUM traffic. Includes a Python IaC framework that automates VRF and network deployment against the NDFC REST API with idempotent plan/apply and state tracking.

Fabric Architecture

  • Underlay: OSPF point-to-point /31 links, MTU 9216 throughout, loopback-based router IDs
  • Overlay: BGP EVPN ASN 65001, two spines as route reflectors with cluster-id for redundancy
  • BUM handling: PIM sparse-mode multicast with anycast RP — scales better than ingress replication
  • Multi-tenancy: PRODUCTION and DEV VRFs with dedicated L3 VNIs (50001/50002)
  • L2 networks: Four VNIs (Web, Database, App, Dev) with ARP suppression active on all
  • Gateway model: Anycast — identical IP and MAC on every leaf for optimal local routing
  • Data plane verified: Cross-VTEP pings confirmed, NVE peers showing CP (control plane) learning
  • DCI: Border Gateway config on spines, Site 2 fabric in progress

Python IaC Framework — NDFC REST API

  • Declarative YAML desired state — define VRFs and Networks, tool handles create/attach/deploy
  • Full three-step NDFC lifecycle per resource: create definition → attach to leaf switches → deploy to hardware
  • Dynamic leaf serial number fetching — handles new serial numbers on every sandbox reservation
  • Idempotent plan/apply — confirmed clean after sync against a correctly deployed fabric
  • Correct lanAttachList payload structure with all required fields (learned through 500 errors)
  • Sync correctly parses NDFC's nested JSON-in-string templateConfig fields for accurate diff
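
The idempotent plan step reduces to a diff between declarative desired state and what the controller reports. The keys and fields below are illustrative, not the exact NDFC schema; the point is that a clean fabric produces an empty plan:

```python
# Minimal plan diff: compare desired VRFs against actual state and emit
# only the actions needed. Field names here are illustrative.

def plan(desired: dict, actual: dict):
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    return actions  # empty list means the fabric already matches desired state

desired = {"PRODUCTION": {"l3vni": 50001}, "DEV": {"l3vni": 50002}}
in_sync = plan(desired, dict(desired))                 # no drift
drifted = plan(desired, {"PRODUCTION": {"l3vni": 50001}})  # DEV missing
```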

Notable Troubleshooting (7 issues documented)

  • TCAM carving: ARP suppression requires arp-ether TCAM region pre-allocated. Region defaulted to size 0 — reduced racl from 1536 to 1024 (must be multiples of 512), allocated 256 to arp-ether, reloaded. Production lesson: TCAM planning must happen before fabric onboarding, not after.
  • NDFC API payload structure: VRF attach required lanAttachList wrapper with switch serial numbers and four required fields — 500 errors until correct structure was identified from the OpenAPI spec.
  • templateConfig parsing: NDFC embeds vlanId, gateway, and suppressArp as a serialized JSON string inside the API response — sync produced false positives on plan until parsed correctly.
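
The templateConfig fix, sketched: NDFC serializes some fields as a JSON document inside a string, so sync has to `json.loads` the inner string before diffing. The field names mirror the ones described above and the surrounding response is simplified:

```python
# Parse NDFC's JSON-in-string templateConfig field so the sync diff
# compares typed values instead of raw strings (simplified response shape).
import json

def parse_network(resp: dict) -> dict:
    inner = json.loads(resp["templateConfig"])  # the nested JSON-in-string
    return {
        "vlanId": int(inner["vlanId"]),
        "gateway": inner["gateway"],
        "suppressArp": inner["suppressArp"] == "true",
    }

resp = {"templateConfig": json.dumps(
    {"vlanId": "2301", "gateway": "10.1.1.1/24", "suppressArp": "true"})}
net = parse_network(resp)
```

Without this step, `"2301" != 2301` style mismatches make every plan look dirty, which is exactly the false-positive behavior described above.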

Technologies Used

Cisco NX-OS NDFC 12.x BGP EVPN VxLAN Python REST API Infrastructure as Code

What I Learned

This project connected my SP networking background directly to data center fabric design. The VxLAN EVPN control plane maps directly to MP-BGP VPN concepts I knew from production — L3 VNIs are the VPN label, VTEP loopbacks are PE loopbacks, spines are P routers, and RT/RD mechanisms are identical. Building the IaC framework against a real vendor API taught me how controllers like NDFC abstract hardware configuration and how to reverse-engineer underdocumented API behavior through systematic debugging.

9 Devices
7 Issues Documented
2-Site DCI In Progress

Cisco Meraki Network Orchestration

Enterprise automation suite covering disaster recovery, multi-site management, monitoring, and zero-trust security

Production-grade automation suite for Cisco Meraki infrastructure built on the Meraki Dashboard API and Python SDK. Covers the full operational lifecycle: automated configuration backup and disaster recovery, multi-site template deployment, continuous network health monitoring, zero-trust security segmentation, and switch port automation. Developed and tested against the Cisco DevNet Meraki sandbox.

Feature Modules (15+ scripts)

  • Disaster Recovery: Automated full-network backup to timestamped JSON — VLANs, firewall rules, SSIDs, switch ports, group policies. Intelligent filtering handles Meraki's auto-generated rules during restore. One-command restoration of entire network configuration.
  • Change Tracking: Backup comparison tool identifies configuration drift between any two snapshots — useful for pre/post-change auditing and compliance documentation.
  • Multi-Site Management: Configuration templates deployed consistently across branch sites. Automated compliance verification confirms all sites match the template standard.
  • Network Monitoring: Health checks covering device status, latency/packet loss, client tracking by SSID/VLAN, switch PoE status, and VLAN segmentation validation. Designed for cron-based automated monitoring.
  • Zero-Trust Security: Group policies with bandwidth limits, scheduled access windows, and per-user firewall rules. Executive, Employee, Contractor, and Guest tiers with deny-by-default segmentation.
  • Switch Automation: Bulk port configuration — workstation ports (data + voice VLAN + PoE), IoT isolation, guest access, trunk uplinks. Configures 100+ ports programmatically.
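
The core of the change-tracking module is a section-by-section comparison of two backup snapshots. The snapshot layout here is illustrative; the real backups hold full VLAN, firewall, SSID, and switch-port exports:

```python
# Compare two timestamped backup snapshots and report drift per section.

def diff_snapshots(before: dict, after: dict):
    drift = {}
    for section in before.keys() | after.keys():
        old, new = before.get(section), after.get(section)
        if old != new:
            drift[section] = {"before": old, "after": new}
    return drift

before = {"vlans": [{"id": 10, "name": "Corporate"}], "ssids": ["Corp"]}
after  = {"vlans": [{"id": 10, "name": "Corporate"}], "ssids": ["Corp", "Guest"]}
changes = diff_snapshots(before, after)  # only the ssids section drifted
```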

Network Topology Automated

  • Four VLANs: Corporate (full access), Guest (internet only), IoT (isolated), Voice (QoS)
  • Four SSIDs mapped to VLANs: Corporate WPA2, Guest open, IoT WPA2, Voice WPA2
  • Ten firewall rules enforcing zero-trust segmentation between VLANs
  • Switch ports 1-22 configured by role: workstations, IoT, guest, trunk uplinks

Technologies Used

Python Meraki Dashboard API Meraki SDK REST APIs Network Automation

What I Learned

Meraki's API-first design made it an ideal platform for learning enterprise network automation patterns. The Dashboard API exposes every configuration element as a REST resource, which reinforced the principle that modern network management is fundamentally software integration. Building the disaster recovery system taught me to think about operational resilience beyond just backups — intelligent filtering, restore validation, and change auditing are what make recovery actually work under pressure. The zero-trust group policy implementation showed how role-based network segmentation translates to concrete API calls.

15+ Automation Scripts
3,000+ Lines of Code
4 Security Tiers

AWS Cloud Infrastructure

AWS Transit Gateway Hub-and-Spoke

Multi-VPC networking with centralized routing

Implemented hub-and-spoke architecture connecting multiple VPCs using AWS Transit Gateway. Demonstrates transitive routing, automatic route propagation, and multi-AZ high availability design. Reduces network complexity from O(N²) with VPC peering to O(N) with centralized routing.
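
The scaling claim in numbers: full-mesh VPC peering needs one connection per VPC pair, while a Transit Gateway hub needs one attachment per VPC:

```python
# O(N^2) full-mesh peering vs O(N) hub-and-spoke attachments.

def peering_links(n: int) -> int:
    return n * (n - 1) // 2   # one peering connection per VPC pair

def tgw_attachments(n: int) -> int:
    return n                  # one attachment per VPC

# e.g. 10 VPCs: 45 peering connections vs 10 TGW attachments
```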

Architecture Features

  • Transit Gateway hub connecting multiple VPCs
  • Automatic route propagation to TGW route tables
  • Dedicated TGW subnets in multiple availability zones
  • Multi-AZ high availability (automatic failover)
  • Transitive routing between all connected VPCs
  • LocalStack testing for cost-free validation
  • Infrastructure as Code with Pulumi

Technologies Used

AWS Transit Gateway VPC Pulumi Python LocalStack

What I Learned

This project showed me how enterprise cloud networking differs from VPC peering. Understanding transitive routing, automatic route propagation, and the scalability advantages of hub-and-spoke architecture gave me insight into how large organizations connect dozens of VPCs. The multi-AZ design taught me about cloud resilience patterns.

~24 AWS Resources
$0.00 LocalStack Testing
O(N) Scaling Complexity

AWS Site-to-Site VPN

Hybrid cloud connectivity with IPSec and BGP

Built hybrid cloud VPN infrastructure connecting on-premises networks to AWS using VPN Gateway and strongSwan. Implements dual IPSec tunnels with BGP dynamic routing for high availability and automatic failover. Comprehensive documentation demonstrates understanding of hybrid cloud patterns.

Implementation Details

  • AWS VPN Gateway with dual IPSec tunnels
  • strongSwan VPN on EC2 (simulated on-prem)
  • BGP dynamic routing for automatic failover
  • Multi-AZ architecture for resilience
  • Network security with security groups and NACLs
  • Traffic flow analysis and debugging
  • Cost-optimized design (~$36/month)

Technologies Used

AWS VPN Gateway IPSec BGP strongSwan Pulumi Python

What I Learned

This project connected my networking background with cloud infrastructure. Understanding how IPSec tunnels work in AWS, configuring BGP for automatic failover, and troubleshooting VPN connectivity issues gave me practical experience with hybrid cloud patterns. The documentation process helped me understand the architecture deeply enough to explain it clearly.

Serverless IPAM System

IP address management with Lambda and DynamoDB

Built IP Address Management system using serverless architecture. Handles subnet planning, IP allocation, and conflict detection using Lambda, DynamoDB, and API Gateway. Includes automated CI/CD pipeline with GitHub Actions and comprehensive testing with LocalStack.

System Features

  • Lambda functions for subnet and IP operations
  • DynamoDB for scalable data storage
  • API Gateway for RESTful interface
  • Automated conflict detection
  • CI/CD pipeline with GitHub Actions
  • Unit tests and integration tests
  • LocalStack for local development
  • Cost: $0.60/month (well within free tier)
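
The conflict-detection check can be sketched with only the standard library's `ipaddress` module: a requested subnet conflicts if it overlaps any allocation already recorded (in the real system, items in DynamoDB):

```python
# Subnet conflict detection: flag any existing allocation that overlaps
# the requested CIDR.
import ipaddress

def find_conflicts(requested: str, allocated: list[str]) -> list[str]:
    req = ipaddress.ip_network(requested)
    return [a for a in allocated if req.overlaps(ipaddress.ip_network(a))]

allocated = ["10.0.0.0/24", "10.0.1.0/24"]
clash = find_conflicts("10.0.0.128/25", allocated)  # inside 10.0.0.0/24
clear = find_conflicts("10.0.2.0/24", allocated)    # no overlap
```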

Technologies Used

AWS Lambda DynamoDB API Gateway GitHub Actions Python LocalStack

What I Learned

This first serverless project taught me event-driven architecture and the Lambda programming model. Setting up CI/CD with GitHub Actions introduced me to automated testing and deployment. Working with DynamoDB showed me NoSQL design patterns. The LocalStack testing environment taught me how to develop AWS applications locally without incurring costs.

$0.60 Monthly Cost
100% Test Coverage
CI/CD Automated

Portfolio Website

Static site hosting with S3 and CloudFront CDN

This website! Built as a static portfolio hosted on S3 with global CDN delivery through CloudFront. Deployed using Infrastructure as Code with Pulumi. Features dark/light theme toggle and HTTPS encryption. Demonstrates web development skills alongside infrastructure knowledge.

Implementation

  • S3 static website hosting
  • CloudFront CDN for global delivery
  • Origin Access Control (OAC) for security
  • HTTPS with AWS managed certificates
  • Dark/light theme with CSS variables
  • Responsive design for mobile devices
  • Infrastructure as Code with Pulumi
  • Cost: ~$0.50/month

Technologies Used

AWS S3 CloudFront Pulumi HTML/CSS/JS Python

What I Learned

Building this portfolio taught me about CloudFront distribution, S3 static hosting, and Origin Access Control. The responsive CSS and theme toggle improved my frontend skills. Using Pulumi to deploy the infrastructure showed me how even simple websites benefit from Infrastructure as Code for repeatability and version control.

AI Document Summarizer

Serverless document summarization using OpenAI GPT-3.5-turbo

Serverless document summarization service using OpenAI GPT-3.5-turbo for AI-powered text analysis. Features user-configurable summary lengths (brief, standard, detailed, comprehensive) with dynamic prompt engineering. Implements comprehensive property-based testing using the Hypothesis framework with over 1,100 automatically generated test cases covering edge cases and boundary conditions.

Key Features

  • PDF and text file processing with PyPDF2
  • User-configurable summary lengths with adaptive prompts
  • Property-based testing with Hypothesis (1,100+ test cases)
  • Comprehensive input validation and error handling
  • Responsive web interface with drag-and-drop upload
  • Real-time summary display with text statistics
  • Direct processing (no S3/DynamoDB storage complexity)
  • Cost-optimized: ~$1/month for moderate use
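
Hypothesis is a third-party library, so this stdlib-only sketch just illustrates the property-based idea it automates: generate many arbitrary inputs and assert an invariant over all of them. The `normalize_length` validator and its fallback behavior are hypothetical stand-ins for the service's input validation:

```python
# Property-based testing idea, hand-rolled with the stdlib: whatever the
# user sends for summary length, the validator must return a valid mode.
import random
import string

VALID = {"brief", "standard", "detailed", "comprehensive"}

def normalize_length(value) -> str:
    v = str(value).strip().lower() if value is not None else ""
    return v if v in VALID else "standard"   # safe fallback for unknown input

random.seed(0)
def random_value():
    return "".join(random.choices(string.printable, k=random.randint(0, 12)))

# Property: holds for every generated input, not just hand-picked cases.
assert all(normalize_length(random_value()) in VALID for _ in range(1000))
assert normalize_length(" BRIEF ") == "brief"
```

Hypothesis does the same thing far more thoroughly: it shrinks failing inputs to minimal counterexamples and explores boundary conditions (empty strings, unicode, huge values) systematically.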

Technologies Used

AWS Lambda API Gateway OpenAI API Pulumi Python Hypothesis

What I Learned

This project taught me OpenAI API integration, prompt engineering for optimal AI responses, and the power of property-based testing. The Hypothesis framework automatically generated edge cases I hadn't considered, catching bugs before deployment. Implementing configurable summary lengths required dynamic prompt construction. The serverless architecture demonstrates cost-effective AI integration without infrastructure overhead.

1,100+ Test Cases
4 Summary Lengths
~$1 Monthly Cost

Multi-Site Datacenter Migration to AWS

Hybrid cloud architecture with Transit Gateway and Site-to-Site VPN

Architected comprehensive multi-site datacenter migration plan to AWS using Transit Gateway and Site-to-Site VPN. Designed phase-based deployment strategy with detailed planning for risk minimization and business continuity. Created hybrid cloud architecture blueprint connecting on-premises infrastructure to AWS VPCs with Infrastructure as Code templates for repeatable deployments.

Project Scope: Conceptual architecture and Phase 1 (infrastructure setup) completed. Subsequent migration phases (Phases 2-4: application migration, data transfer, cutover) were not implemented due to lack of production infrastructure for realistic testing. Demonstrates migration planning, architecture design, and hybrid connectivity patterns applicable to real enterprise migrations.

Architecture Components

  • Transit Gateway hub-and-spoke design
  • Site-to-Site VPN for on-premises connectivity
  • Multi-VPC network segmentation
  • Phased migration strategy with rollback plans
  • Cost optimization analysis and modeling
  • Infrastructure as Code with Pulumi
  • Security architecture (IAM, Security Groups, NACLs)
Technologies Used

AWS Transit Gateway Site-to-Site VPN VPC Pulumi Python

What I Learned

This project taught me enterprise migration planning, phase-based deployment strategies, and the complexities of hybrid cloud architecture. Designing connectivity between on-premises and AWS requires understanding both traditional networking (BGP, IPSec) and cloud networking (Transit Gateway, route propagation). The planning process highlighted risk management, business continuity, and cost modeling essential for real-world migrations.

Secure ECS Fargate Application

Production-grade containerized application with security best practices

Security-hardened containerized application deployment using ECS Fargate with production-grade architecture. Implements private subnets with VPC endpoints (eliminating NAT Gateway costs), Secrets Manager for credential management, Multi-AZ RDS PostgreSQL, and auto-scaling. Demonstrates container orchestration with comprehensive security controls.

Architecture Highlights

  • ECS Fargate tasks in private subnets (no public IPs)
  • VPC endpoints for AWS services (ECR, Secrets Manager, CloudWatch)
  • Secrets Manager integration (no hardcoded credentials)
  • Multi-AZ RDS PostgreSQL with encryption at rest
  • Application Load Balancer with health checks
  • Auto-scaling (2-4 tasks based on CPU utilization)
  • IAM roles following least-privilege principles
  • CloudWatch Logs and VPC Flow Logs for monitoring

Security Features

  • Network Security: Private subnets only, security groups with least privilege, VPC Flow Logs
  • IAM Security: Task execution role and task role separation, no hardcoded credentials
  • Data Security: RDS encryption at rest (KMS), encryption in transit (SSL), automated backups
  • Container Security: ECR image scanning, non-root user, resource limits

Technologies Used

ECS Fargate Docker RDS PostgreSQL Secrets Manager ALB Pulumi Python

What I Learned

This project taught me containerized application deployment at scale with security as the primary focus. Using VPC endpoints instead of NAT Gateway demonstrated cost optimization while maintaining security. The separation of task execution roles (infrastructure permissions) from task roles (application permissions) showed proper IAM design. Multi-AZ RDS deployment with automated backups demonstrated high availability patterns. The project bridges traditional infrastructure knowledge with modern container orchestration.

$119 Monthly Cost
Multi-AZ High Availability
2-4 Auto-Scaling Tasks

Enterprise AWS Security Architecture

Defense-in-depth security with Network ACLs, Security Groups, and IAM

Enterprise-grade AWS security architecture implementing defense-in-depth principles with multiple security layers. Demonstrates Network ACLs (stateless subnet-level firewall), Security Groups (stateful instance-level firewall), IAM least-privilege roles, VPC Flow Logs for traffic analysis, and CloudWatch monitoring. Follows AWS Well-Architected Framework security pillar best practices.

Security Layers Implemented

  • Layer 1 - Network ACLs: Stateless subnet-level firewall with explicit allow/deny rules
  • Layer 2 - Security Groups: Stateful instance-level firewall with tiered architecture
  • Layer 3 - IAM Roles: Least-privilege access control, service-specific permissions
  • Layer 4 - Monitoring: VPC Flow Logs, CloudWatch dashboards, EventBridge alerting
  • Multi-tier architecture: Bastion, Web, Application, Database tiers
  • Network segmentation: Public and private subnets across 3 availability zones
  • S3 server-side encryption at rest (AES-256)
  • Zero Trust principles: Deny-by-default, explicit allow rules

Security Group Tiers

  • Bastion Tier: SSH jump host, SSM Session Manager (recommended over SSH)
  • Web Tier: HTTP/HTTPS from internet, SSH from bastion only
  • Application Tier: Port 8080 from web tier, SSH from bastion only
  • Database Tier: PostgreSQL/MySQL from app tier, SSH from bastion only

Technologies Used

VPC Security Groups Network ACLs IAM VPC Flow Logs CloudWatch Pulumi Python

What I Learned

This project deepened my understanding of AWS security beyond basic configurations. Implementing both Network ACLs (stateless) and Security Groups (stateful) showed the importance of defense-in-depth: multiple security layers providing redundancy if one fails. The distinction between subnet-level and instance-level firewalls became clear through practical implementation. IAM role design for each tier (bastion, web, app, database) demonstrated least-privilege access patterns. VPC Flow Logs analysis revealed network traffic patterns useful for troubleshooting and security auditing. This architecture aligns with AWS Well-Architected Framework and enterprise security requirements.

4 Security Layers
Multi-AZ 3 Availability Zones
~$35 Monthly Cost

Infrastructure as Code: Pulumi vs Terraform

All AWS projects in this portfolio are deployed using Pulumi with Python. While Terraform is the industry standard and widely adopted, Pulumi offers several advantages that made it my choice for learning and personal projects. That said, I'm tool-agnostic and ready to use Terraform if that's what the team or organization prefers.

Why I Chose Pulumi

Use Real Programming Languages: Pulumi uses Python, TypeScript, Go, or C# instead of HCL. This means I can use familiar programming constructs (loops, conditionals, functions), leverage existing libraries, and apply software engineering best practices. For network engineers learning DevOps, using Python for both automation scripts and infrastructure deployment creates consistency and reduces the learning curve.

Native Language Features

Real Programming Power: Need to process a list of subnets dynamically? In Pulumi, I use Python's list comprehensions and standard library. Need to make an API call during deployment? Import the requests library. Terraform requires custom providers or complex workarounds for similar tasks. Pulumi feels natural to developers and scripters.
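
The "process a list of subnets dynamically" point, sketched with the standard library (a Pulumi program would feed these CIDRs straight into VPC subnet resources; the AZ names are illustrative):

```python
# Carve per-AZ subnets out of a VPC CIDR with a plain list comprehension.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]   # illustrative AZ names

# One /24 per AZ -- ordinary Python, no DSL workarounds needed.
subnets = [
    {"az": az, "cidr": str(cidr)}
    for az, cidr in zip(azs, vpc.subnets(new_prefix=24))
]
```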

Excellent IDE Support

Better Developer Experience: Because Pulumi uses real programming languages, I get full IDE support: autocomplete, type checking, inline documentation, and immediate error detection. Writing infrastructure code feels like writing any other Python application, with IntelliSense helping discover available resources and their properties.

Testing Flexibility

Unit and Integration Testing: Pulumi integrates with standard testing frameworks (pytest, unittest). I can test infrastructure logic before deployment, mock resources, and validate configurations. The AI Document Summarizer's Hypothesis testing (1,100+ test cases) demonstrates the testing capabilities available when using real programming languages.

Quick Comparison

Feature            | Pulumi                                | Terraform
Language           | Python, TypeScript, Go, C#            | HCL (HashiCorp Configuration Language)
Learning Curve     | Lower (if you know Python)            | New syntax to learn (HCL)
IDE Support        | Full IntelliSense, autocomplete       | Basic (HCL language server)
Testing            | pytest, unittest, standard frameworks | Terratest, custom testing
Maturity           | Newer (2018)                          | Established (2014)
Adoption           | Growing                               | Industry standard
Provider Ecosystem | Good (100+ providers)                 | Excellent (1,000+ providers)

The Bottom Line: Pulumi was the right choice for my learning journey because it leverages Python skills I already have and aligns with network automation patterns (Python scripts, NETCONF, API integration). The programming-language approach made Infrastructure as Code feel familiar rather than learning yet another DSL.

In a Professional Setting: If a team or company is already using Terraform, I'm absolutely comfortable using it. The concepts (state management, providers, resources, modules) are identical—only the syntax differs. Learning Terraform's HCL is straightforward when you understand IaC principles. The tool matters less than the methodology, and I'm tool-agnostic in professional environments.

My Approach: Use the best tool for the job and the team. Pulumi for personal projects where Python integration adds value. Terraform where it's the standard. Both are excellent tools, and proficiency with either demonstrates understanding of modern infrastructure practices.

Project Cost Management

All projects are designed with cost optimization in mind. Using LocalStack for testing, AWS Free Tier services, and serverless architectures keeps monthly costs minimal while providing real-world experience.

IPAM System

$0.60/month

Lambda + DynamoDB (within free tier)

Portfolio Website

$0.50/month

S3 + CloudFront

VPN/Transit Gateway

$0.00

LocalStack testing (destroyed after demos)

Total Monthly

~$1.10

Running production projects