Tags: LLM Integration · LLaMA 3 · Client Work · AI Strategy · Python

AI Readiness Assessment Tool — Process Optimizer Pro

Client: Tarsi Group
Program: Georgian College AIDI 2024
Model: Meta LLaMA 3 8B
Companies assessed: 60
Score range: 31%–76%

What this project was

As part of the Georgian College AIDI 2024 cohort, I contributed to Process Optimizer Pro — an AI-driven business process optimisation tool built for Tarsi Group. My piece was the AI Readiness Assessment component: a full pipeline that takes structured company profiles as input, scores them across five dimensions, and uses Meta's LLaMA 3 8B to generate plain-language advisory recommendations for each organisation.

This wasn't a lightweight wrapper around an API call. I loaded and ran LLaMA 3 8B locally using HuggingFace Transformers with GPU detection, designed the scoring logic from scratch, and engineered the prompts that fed company context into the model. The pipeline was then run against 60 real companies and the results exported to Excel for Tarsi Group.

8B: LLaMA 3 parameters
60: companies assessed
5: scoring dimensions
100: max score points
31–76%: score range

The pipeline

Four stages — CSV input, scoring engine, prompt engineering, LLM inference — all chained into a single notebook that exports to Excel.

stage 1 — CSV load and stage 4 — LLaMA 3 setup
import os
import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Stage 1: load the structured company profiles (filename illustrative)
companies = pd.read_csv('company_profiles.csv')

# GPU if available, CPU fallback
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# HuggingFace access token for the gated model weights
HF_TOKEN = os.environ['HF_TOKEN']

# Load Meta LLaMA 3 8B via HuggingFace
MODEL_ID = 'Undi95/Meta-Llama-3-8B-hf'
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, token=HF_TOKEN, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=HF_TOKEN)
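The inference call itself is not shown above. A minimal sketch of how each company's prompt could be fed through the loaded model, assuming standard HuggingFace `generate()` usage (the function name and the `max_new_tokens` value are my own, not the project's actual code):

```python
def generate_recommendation(model, tokenizer, prompt, device, max_new_tokens=300):
    """Feed one company prompt through the model and return only the new text.

    `model` and `tokenizer` are the HuggingFace objects loaded earlier; all
    names here are illustrative.
    """
    inputs = tokenizer(prompt, return_tensors='pt').to(device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # generate() echoes the prompt tokens first; slice them off before decoding
    new_ids = output_ids[0][inputs['input_ids'].shape[-1]:]
    return tokenizer.decode(new_ids, skip_special_tokens=True).strip()
```

Decoding only the newly generated tokens keeps the serialised company profile out of the advisory text written to the results file.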

Scoring dimensions — 100 points total

Each company was scored across five sections. I designed these weights to reflect that data readiness is the most critical blocker for AI adoption, followed by goal clarity and strategic commitment.

Section | Dimension | Max Points | Key factors
2 | Technology Use | 15 | Number of tech functions, CRM/ERP/Analytics stack presence
3 | Data Readiness | 35 | Data collection, structure, update frequency, storage quality
4 | AI Awareness & Skills | 15 | Dedicated AI team, ML expertise, previous AI projects, familiarity score
5 | Business Goals & Use Cases | 20 | Goal alignment, specificity of AI use cases
6 | Strategic Alignment | 15 | Leadership score, formal objectives, budget commitment
stage 2 — scoring engine (section 4 example)
def calculate_section_scores(row):
    scores = {}

    # Section 4: AI Awareness and Skills (15 Points)
    ai_team        = 5 if row['AI Team (Yes/No)'] == 'Yes' else 0
    ai_expertise   = 5 if row['Employees with AI/ML Expertise (Yes/No)'] == 'Yes' else 0
    prev_projects  = 5 if row['AI Projects Implemented (Yes/No)'] == 'Yes' else 0
    familiarity    = 2 if row['AI Familiarity (1-5)'] == 5 else (1 if row['AI Familiarity (1-5)'] == 4 else 0)
    scores['ai_awareness'] = ai_team + ai_expertise + prev_projects + familiarity

    return scores
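Applied to a hypothetical company profile (field values invented for illustration), the section-4 logic above can be exercised end to end; the other four sections follow the same pattern, and the five subtotals sum to a score out of 100:

```python
def calculate_section_scores(row):
    scores = {}
    # Section 4: AI Awareness and Skills (15 points)
    ai_team       = 5 if row['AI Team (Yes/No)'] == 'Yes' else 0
    ai_expertise  = 5 if row['Employees with AI/ML Expertise (Yes/No)'] == 'Yes' else 0
    prev_projects = 5 if row['AI Projects Implemented (Yes/No)'] == 'Yes' else 0
    familiarity   = 2 if row['AI Familiarity (1-5)'] == 5 else (1 if row['AI Familiarity (1-5)'] == 4 else 0)
    scores['ai_awareness'] = ai_team + ai_expertise + prev_projects + familiarity
    return scores

# Hypothetical profile, not one of the 60 assessed companies
sample_row = {
    'AI Team (Yes/No)': 'Yes',
    'Employees with AI/ML Expertise (Yes/No)': 'Yes',
    'AI Projects Implemented (Yes/No)': 'No',
    'AI Familiarity (1-5)': 4,
}
scores = calculate_section_scores(sample_row)
# 5 (team) + 5 (expertise) + 0 (no projects) + 1 (familiarity 4) = 11 of 15
```

With all five sections computed, summing the subtotals gives the readiness percentage directly, since the maximum points total 100.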

Prompt engineering

Each company's profile was serialised into a structured prompt that gave the LLM full context — technology stack, data practices, skills, goals, leadership score, and the calculated readiness percentage — before asking for advisory output.

stage 3 — prompt construction
def create_prompt(row):
    prompt = f"""
    You are an AI integration consultant. Based on the AI Readiness
    Assessment Form for {row['Company Name']}:
    1. Technology Use: {row['Technology Functions']} (Stack: {row['Technology Stack']})
    2. Data Readiness: {row['Collect Data (Yes/No)']} — Types: {row['Types of Data Collected']}
    3. AI Awareness and Skills: Score {row['AI Familiarity (1-5)']} out of 5
    4. Business Goals: {row['Business Goals with AI']}, Use Cases: {row['AI Use Cases']}
    5. Leadership Alignment: {row['Leadership Alignment with AI (1-5)']} out of 5
    Overall readiness score: {row['AI_Readiness_Percentage']}%.
    Recommendation: {classify_readiness(row)}
    """
    return prompt
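`create_prompt` calls a `classify_readiness` helper that is not shown here. A plausible reconstruction, assuming the thresholds match the legend of the results chart (at least 70% is Moderate Readiness, anything lower is Low Readiness):

```python
def classify_readiness(row):
    # Thresholds inferred from the results chart legend, not from the original code
    pct = row['AI_Readiness_Percentage']
    return 'Moderate Readiness' if pct >= 70 else 'Low Readiness'
```

This label is appended to the prompt so the model's advice is anchored to the computed band rather than left to free interpretation of the raw percentage.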

Sample LLM output — Stellar Logistics (72%)

LLaMA 3 generated recommendation

"Based on the assessment, Stellar Logistics demonstrates Moderate Readiness for AI adoption. The organisation has a strong foundation with a robust technology stack (Salesforce, SAP, Tableau, Alteryx), structured operational data updated hourly, and a dedicated AI team. Key strengths include well-defined use cases in predictive fleet maintenance and real-time delivery tracking, supported by strong leadership alignment (5/5).

To move toward High Readiness, focus on closing the gap in formal AI objectives documentation and ensuring budget allocation is formalised. A pilot project in predictive maintenance for fleet — where data quality is strongest — is the recommended entry point. This would provide measurable ROI and build internal confidence before scaling to full delivery tracking integration."
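The final deliverable was an Excel file covering all 60 assessments. A minimal sketch of how each company's score, classification, and generated advice could be collected and written out with pandas (column names, file name, and the sample values are my assumptions, not the project's actual schema):

```python
import pandas as pd

# Hypothetical accumulated results; the real run produced 60 rows
results = [
    {'Company Name': 'Stellar Logistics',
     'AI_Readiness_Percentage': 72,
     'Classification': 'Moderate Readiness',
     'LLM_Recommendation': 'Pilot predictive fleet maintenance first.'},
]
results_df = pd.DataFrame(results)

def export_results(df, path='ai_readiness_results.xlsx'):
    # Writing .xlsx requires an engine such as openpyxl to be installed
    df.to_excel(path, index=False)
```

Calling `export_results(results_df)` would write a spreadsheet in the same spirit as the deliverable, one row per assessed company.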

Results across all 60 companies

The chart below shows readiness scores for every company assessed, ordered highest to lowest. Blue bars are Moderate Readiness (≥70%), orange bars are Low Readiness.

AI Readiness scores across 60 assessed companies — blue = Moderate Readiness (≥70%), orange = Low Readiness
22: Moderate Readiness
38: Low Readiness
76%: highest (SmartRetail)
31%: lowest (HomeRenew)

What the data showed

No company scored above 76% — even the most mature organisations had gaps, typically in formal AI objectives or dedicated teams. Leadership alignment (Section 6) was the single strongest predictor of a higher score. Companies scoring 4–5 on leadership consistently landed in Moderate Readiness regardless of size or technical stack. The lowest scorers were small businesses under 60 employees using Excel as their primary analytics tool with AI familiarity of 1/5 — for these, the recommendation was consistent: foundational data infrastructure first.

What I contributed

I designed and implemented the five-dimension scoring framework, wrote the weighted scoring logic in Python, engineered the LLM prompts, integrated and ran LLaMA 3 8B locally, and handled technical documentation and knowledge transfer to Tarsi Group. The Excel results file with all 60 assessments and LLM-generated recommendations was the final deliverable.