As part of the Georgian College AIDI 2024 cohort, I contributed to Process Optimizer Pro — an AI-driven business process optimisation tool built for Tarsi Group. My piece was the AI Readiness Assessment component: a full pipeline that takes structured company profiles as input, scores them across five dimensions, and uses Meta's LLaMA 3 8B to generate plain-language advisory recommendations for each organisation.
This wasn't a lightweight wrapper around an API call. I loaded and ran LLaMA 3 8B locally using HuggingFace Transformers with GPU detection, designed the scoring logic from scratch, and engineered the prompts that fed company context into the model. The pipeline was then run against 60 real companies and the results exported to Excel for Tarsi Group.
Four stages — CSV input, scoring engine, prompt engineering, LLM inference — all chained into a single notebook that exports to Excel.
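The chaining of those four stages can be sketched as a single driver function. This is a simplified sketch, not the notebook's exact code: the `score_fn`, `prompt_fn`, and `generate_fn` callables stand in for the scoring, prompt-building, and inference functions shown later in this post, and the column names are illustrative.

```python
import pandas as pd

def assess_companies(df, score_fn, prompt_fn, generate_fn):
    """Chain the middle stages: scoring -> prompt -> LLM recommendation."""
    out = df.copy()
    # Stage 2: weighted readiness score per company row
    out["AI_Readiness_Percentage"] = out.apply(score_fn, axis=1)
    # Stages 3-4: serialise each scored row into a prompt, then run inference
    out["Recommendation"] = out.apply(lambda row: generate_fn(prompt_fn(row)), axis=1)
    return out

# Stage 1 (CSV in) and the Excel export bracket the call:
# df = assess_companies(pd.read_csv("companies.csv"), score_fn, prompt_fn, generate_fn)
# df.to_excel("results.xlsx", index=False)
```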
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Gated model: read the HuggingFace access token from the environment
HF_TOKEN = os.environ["HF_TOKEN"]

# GPU if available, CPU fallback
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load Meta LLaMA 3 8B via HuggingFace
MODEL_ID = 'Undi95/Meta-Llama-3-8B-hf'
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=HF_TOKEN)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, token=HF_TOKEN, device_map='auto')
```
Each company was scored across five sections. I designed these weights to reflect that data readiness is the most critical blocker for AI adoption, followed by goal clarity and strategic commitment.
| Section | Dimension | Max Points | Key factors |
|---|---|---|---|
| 2 | Technology Use | 15 | Number of tech functions, CRM/ERP/Analytics stack presence |
| 3 | Data Readiness | 35 | Data collection, structure, update frequency, storage quality |
| 4 | AI Awareness & Skills | 15 | Dedicated AI team, ML expertise, previous AI projects, familiarity score |
| 5 | Business Goals & Use Cases | 20 | Goal alignment, specificity of AI use cases |
| 6 | Strategic Alignment | 15 | Leadership score, formal objectives, budget commitment |
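In code, the weights and the readiness bands might be captured like this. A sketch only: the notebook's `classify_readiness` took the full company row rather than a bare percentage, and the High Readiness cutoff here is an assumption, since no assessed company reached that band.

```python
# Section maxima from the table above (they sum to 100)
SECTION_MAX = {
    'technology_use': 15,
    'data_readiness': 35,
    'ai_awareness': 15,
    'business_goals': 20,
    'strategic_alignment': 15,
}

def readiness_percentage(scores):
    """Earned points across all sections as a percentage of the 100-point total."""
    return 100 * sum(scores.values()) / sum(SECTION_MAX.values())

def classify_readiness_pct(pct):
    # >=70% is Moderate Readiness, matching the chart colouring below;
    # the High cutoff is an assumption -- no company in this run reached it
    if pct >= 85:
        return 'High Readiness'
    if pct >= 70:
        return 'Moderate Readiness'
    return 'Low Readiness'
```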
```python
def calculate_section_scores(row):
    scores = {}

    # Section 4: AI Awareness and Skills (15 points)
    ai_team = 5 if row['AI Team (Yes/No)'] == 'Yes' else 0
    ai_expertise = 5 if row['Employees with AI/ML Expertise (Yes/No)'] == 'Yes' else 0
    prev_projects = 5 if row['AI Projects Implemented (Yes/No)'] == 'Yes' else 0
    familiarity = 2 if row['AI Familiarity (1-5)'] == 5 else (1 if row['AI Familiarity (1-5)'] == 4 else 0)
    # Clamp to the section's 15-point maximum (the sub-scores can total 17)
    scores['ai_awareness'] = min(ai_team + ai_expertise + prev_projects + familiarity, 15)

    return scores
```
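The remaining sections were scored the same way. As an illustrative sketch (the column names and point splits below are assumptions, not the project's exact weights), the 35-point Section 3 data-readiness score could look like:

```python
def score_data_readiness(row):
    """Sketch of Section 3 (35 points): data collection, structure,
    update frequency, and storage quality. Column names and point
    values are assumptions for illustration."""
    score = 0
    score += 10 if row.get('Collect Data (Yes/No)') == 'Yes' else 0
    score += 10 if row.get('Structured Data (Yes/No)') == 'Yes' else 0
    # More frequent updates earn more points
    score += {'Hourly': 10, 'Daily': 7, 'Weekly': 4}.get(row.get('Data Update Frequency'), 0)
    score += 5 if row.get('Centralised Storage (Yes/No)') == 'Yes' else 0
    return score  # 0-35
```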
Each company's profile was serialised into a structured prompt that gave the LLM full context — technology stack, data practices, skills, goals, leadership score, and the calculated readiness percentage — before asking for advisory output.
```python
def create_prompt(row):
    prompt = f"""
You are an AI integration consultant. Based on the AI Readiness Assessment Form for {row['Company Name']}:

1. Technology Use: {row['Technology Functions']} (Stack: {row['Technology Stack']})
2. Data Readiness: {row['Collect Data (Yes/No)']} — Types: {row['Types of Data Collected']}
3. AI Awareness and Skills: Score {row['AI Familiarity (1-5)']} out of 5
4. Business Goals: {row['Business Goals with AI']}, Use Cases: {row['AI Use Cases']}
5. Leadership Alignment: {row['Leadership Alignment with AI (1-5)']} out of 5

Overall readiness score: {row['AI_Readiness_Percentage']}%.
Recommendation: {classify_readiness(row)}
"""
    return prompt
```
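The inference step itself isn't shown in the excerpt above. A minimal sketch, assuming the `model` and `tokenizer` loaded earlier (the decoding parameters here are assumptions, not the notebook's exact settings):

```python
def generate_recommendation(model, tokenizer, prompt, max_new_tokens=300):
    """Run one prompt through the model and return only the newly generated text."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,          # sampled decoding; temperature is an assumption
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Slice off the echoed prompt tokens before decoding
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```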
"Based on the assessment, Stellar Logistics demonstrates Moderate Readiness for AI adoption. The organisation has a strong foundation with a robust technology stack (Salesforce, SAP, Tableau, Alteryx), structured operational data updated hourly, and a dedicated AI team. Key strengths include well-defined use cases in predictive fleet maintenance and real-time delivery tracking, supported by strong leadership alignment (5/5).
To move toward High Readiness, focus on closing the gap in formal AI objectives documentation and ensuring budget allocation is formalised. A pilot project in predictive maintenance for fleet — where data quality is strongest — is the recommended entry point. This would provide measurable ROI and build internal confidence before scaling to full delivery tracking integration."
The chart below shows readiness scores for every company assessed, ordered highest to lowest. Blue bars are Moderate Readiness (≥70%), orange bars are Low Readiness.
No company scored above 76% — even the most mature organisations had gaps, typically in formal AI objectives or dedicated teams. Leadership alignment (Section 6) was the single strongest predictor of a higher score. Companies scoring 4–5 on leadership consistently landed in Moderate Readiness regardless of size or technical stack. The lowest scorers were small businesses under 60 employees using Excel as their primary analytics tool with AI familiarity of 1/5 — for these, the recommendation was consistent: foundational data infrastructure first.
I designed and implemented the five-dimension scoring framework, wrote the weighted scoring logic in Python, engineered the LLM prompts, integrated and ran LLaMA 3 8B locally, and handled technical documentation and knowledge transfer to Tarsi Group. The Excel results file with all 60 assessments and LLM-generated recommendations was the final deliverable.