Response Time: Over a Week to Under Three Days. Routing Accuracy: Over 95%.
A county government IT department in the Tampa, FL area responsible for managing service requests for a population of roughly 500,000 residents.
What They Were Facing
A county government IT department in the Tampa, FL area was responsible for managing service requests for a population of roughly 500,000 residents. The volume was staggering: tens of thousands of service requests per month flowed in through phone, email, web forms, and in-person visits. These requests covered everything from pothole reports and building permit inquiries to utility billing disputes and code enforcement complaints.

The average response time had ballooned to over a week. Residents were frustrated, county commissioners were asking questions, and front-line staff were drowning. A significant part of the problem was routing: nearly a quarter of incoming requests were sent to the wrong department on initial intake, which meant they'd sit in the wrong queue for days before someone noticed the mistake and re-routed them. A pothole report sent to the parks department instead of public works might not get corrected for a week.

The intake process itself was almost entirely manual. Staff members read each request, decided which department should handle it, assigned a priority level, and entered it into the county's work order system. During peak periods (after storms, at the start of budget cycles), the backlog grew faster than staff could process it. The county had tried adding temporary workers during spikes, but routing requests accurately required understanding the county's organizational structure, which took weeks to learn.

The county's IT director had a modest budget and a mandate to show measurable improvement before the next budget cycle. Enterprise government automation platforms were evaluated but exceeded the available funding by a factor of three. The county needed something purpose-built, affordable, and deployable within a single fiscal quarter.
Tens of thousands of service requests per month across phone, email, web forms, and in-person visits
Average response time had ballooned to over a week with frustrated residents and county commissioners
Nearly a quarter of incoming requests routed to the wrong department on initial intake
Manual intake process could not keep pace during peak periods after storms or budget cycles
Enterprise government platforms exceeded available budget by a factor of three
How We Solved It
We started by analyzing six months of historical request data: 72,000+ records showing what came in, how it was classified, where it was routed, how long it took to resolve, and where errors occurred. The patterns were clear. Roughly two-thirds of incoming requests fell into a dozen predictable categories that could be routed based on keywords, location data, and request type alone. Another quarter or so required some interpretation but followed patterns the system could learn from historical routing decisions. The remainder were genuinely ambiguous and needed human judgment.

We built an intelligent intake system that processes incoming requests regardless of channel. The system reads the request content, extracts key information (location, issue type, urgency indicators), classifies it against the county's department taxonomy, and routes it to the correct team with a confidence score. High-confidence requests go directly into the department's work queue. Lower-confidence requests go to a triage queue for human review, but with a recommended routing and the reasoning behind it.

For the citizen-facing side, we built a web portal where residents can submit requests, check status, and receive updates. The portal uses the same classification engine, so it asks follow-up questions based on the issue type to gather the right information upfront. A pothole report prompts for location and severity photos. A billing dispute prompts for the account number and billing period. This front-loaded data collection eliminated much of the back-and-forth that had been adding days to resolution times.

The quality assurance layer monitors routing accuracy, response times, and resolution rates in real time. Department managers get weekly dashboards showing their team's performance against service level targets. The system also flags requests that have been open beyond their expected resolution window, preventing items from falling through the cracks.
Analyzed 72,000+ historical records to identify classification patterns across 12 predictable categories
Intelligent intake system processing requests from all channels with confidence-scored routing
Citizen web portal with issue-specific follow-up questions for front-loaded data collection
Classification engine routing roughly two-thirds of requests automatically with about a quarter going to assisted triage
QA layer monitoring routing accuracy, response times, and flagging overdue requests in real time
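The confidence-scored routing described above can be sketched roughly as follows. This is a minimal illustration, not the production system: the department names, keyword rules, and the 0.8 threshold are all hypothetical stand-ins for the county's full taxonomy and learned patterns.

```python
from dataclasses import dataclass

# Illustrative keyword rules only; the real system combines keywords,
# location data, and patterns learned from historical routing decisions.
KEYWORD_RULES = {
    "public_works": ["pothole", "road", "sidewalk", "streetlight"],
    "utilities": ["billing", "water bill", "meter"],
    "code_enforcement": ["overgrown", "noise", "violation"],
}

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for auto-routing


@dataclass
class RoutingDecision:
    department: str
    confidence: float
    auto_routed: bool  # True -> department work queue; False -> triage


def route(request_text: str) -> RoutingDecision:
    """Score each department by keyword hits and route the request."""
    text = request_text.lower()
    scores = {
        dept: sum(1 for kw in kws if kw in text)
        for dept, kws in KEYWORD_RULES.items()
    }
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    confidence = scores[best] / total if total else 0.0
    if confidence >= CONFIDENCE_THRESHOLD:
        return RoutingDecision(best, confidence, auto_routed=True)
    # Low confidence: human triage, but with a recommendation attached.
    return RoutingDecision(best, confidence, auto_routed=False)
```

A request like "Large pothole on Main St" would score highest for public works and clear the threshold, landing directly in that department's queue; an ambiguous request would carry its recommendation into the triage queue instead.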
Measurable Outcomes
Quantifiable improvements delivered within the project timeline
Average response time reduced from over a week to under three days
Routing accuracy improved from roughly 75% to over 95% on initial classification
Over 70% reduction in staff time spent on sorting and routing
Roughly a third of requests now submitted through the citizen portal (up from 0%)
Misrouted requests dropped from nearly a quarter to under 5% of incoming requests
Backlog during peak periods reduced by over half
The under-three-day average response time represents time-to-first-action, not time-to-resolution (which varies widely by request type). The improvement came from two sources: faster routing, which eliminated days of sitting in the wrong queue, and better initial data capture, which reduced the back-and-forth before a department could start working on a request.

The over-70% reduction in staff sorting time freed several full-time equivalent positions worth of capacity. The county didn't eliminate positions; they reassigned staff to case management and direct citizen services. The IT director's presentation to the county commission showed a cost avoidance of several hundred thousand dollars annually from the reduced need for temporary staff during peak periods.

The citizen portal was a bonus that exceeded expectations. Within three months, more than a third of all requests were coming through the portal rather than phone or email. These portal-submitted requests resolved significantly faster on average because they arrived with complete, structured information rather than a vague voicemail or one-line email.
Implementation Timeline
A structured approach from discovery to deployment
Weeks 1-2: Analyzed 72,000+ records and mapped department routing
Weeks 3-5: Built multi-channel intelligent routing system
Weeks 6-7: Web portal with issue-specific intake forms
Weeks 8-9: Real-time performance tracking and overdue request flagging
Week 10: Connected to county's existing work order platform
Weeks 11-12: Piloted in two departments with staff training
Week 13: County-wide deployment with ongoing monitoring
Frequently Asked Questions
How did you handle the wide variety of request types the county receives?
We didn't try to build a rule for every possible request type. Instead, the system learns from historical routing decisions. For the dozen most common categories (which account for roughly two-thirds of volume), we built explicit classification rules. For less common categories, the system uses pattern matching against similar past requests. For truly unusual requests, it routes to a general triage queue. The system's accuracy improves over time as it processes more data, and human corrections on misrouted items feed back into the model.
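The pattern matching against past requests, and the correction feedback loop, might look something like this toy sketch. The history records and the word-overlap similarity measure are illustrative assumptions; the production model is more sophisticated, but the flow is the same: match against history, fall back to general triage, and fold human corrections back in.

```python
from collections import Counter
from typing import Optional

# Hypothetical historical log: (request text, final correct department).
HISTORY = [
    ("pothole on elm street", "public_works"),
    ("water bill too high", "utilities"),
    ("neighbor's yard overgrown", "code_enforcement"),
    ("broken streetlight on oak ave", "public_works"),
]


def match_by_history(request_text: str, k: int = 3) -> Optional[str]:
    """Route by word overlap with past requests (a toy stand-in for the
    pattern matching described above)."""
    words = set(request_text.lower().split())
    ranked = sorted(
        HISTORY,
        key=lambda rec: len(words & set(rec[0].split())),
        reverse=True,
    )
    # Keep only past requests that actually share vocabulary.
    top = [dept for text, dept in ranked[:k] if words & set(text.split())]
    if not top:
        return None  # genuinely unusual -> general triage queue
    return Counter(top).most_common(1)[0][0]


def record_correction(request_text: str, correct_dept: str) -> None:
    """Human corrections on misrouted items feed back into the history
    the matcher learns from."""
    HISTORY.append((request_text.lower(), correct_dept))
```

Each correction a triage reviewer makes expands the history, which is why accuracy keeps improving as the system processes more data.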
What happens during major events like hurricanes when request volume spikes dramatically?
The system scales with volume. During a simulated surge test using historical hurricane-related request data, it processed 4x normal volume without degradation in routing accuracy or speed. The citizen portal actually becomes more valuable during surge events because it captures structured data at intake, reducing the load on phone lines and walk-in offices. The county has also pre-configured surge-specific routing rules that activate during declared emergencies.
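The pre-configured surge rules mentioned above amount to a routing overlay that is only active while an emergency is declared. A minimal sketch, with hypothetical category names and rule contents:

```python
# Hypothetical surge overlay: during a declared emergency, storm-related
# categories are rerouted and reprioritized; otherwise normal routing holds.
SURGE_RULES = {
    "downed_tree": {"department": "emergency_ops", "priority": "high"},
    "flooding": {"department": "emergency_ops", "priority": "high"},
}


def apply_surge_rules(category: str, base_routing: dict,
                      emergency_declared: bool) -> dict:
    """Overlay surge-specific routing only while an emergency is declared."""
    if emergency_declared and category in SURGE_RULES:
        return SURGE_RULES[category]
    return base_routing
```

Keeping the surge rules as a separate overlay means they can be tested ahead of hurricane season and switched on instantly, without touching the everyday routing configuration.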
Is the citizen portal accessible to residents with limited technology access?
The portal is mobile-responsive and works on any smartphone with a web browser. It supports English and Spanish. But the phone and in-person intake channels remain fully operational. The portal is an additional channel, not a replacement. Requests from all channels feed into the same classification and routing system, so the quality of service is consistent regardless of how a resident contacts the county.
How do you measure routing accuracy when some requests could reasonably go to multiple departments?
This question came up during the discovery phase. We worked with department heads to define primary and acceptable-alternate routing for ambiguous categories. A request that goes to either the primary or an acceptable alternate department counts as correctly routed. The over-95% accuracy figure uses this definition. Under a strict primary-only definition, accuracy is still nearly 90%, which remains a major improvement over the roughly 75% baseline.
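The two accuracy definitions can be made concrete with a short sketch. The category names and the acceptable-alternate map below are hypothetical examples, not the county's actual taxonomy:

```python
# Hypothetical map of ambiguous categories to the set of departments
# that count as correctly routed (primary plus acceptable alternates).
ACCEPTABLE = {
    "drainage_complaint": {"public_works", "stormwater"},
    "sign_damage": {"public_works", "traffic_ops"},
}


def routing_accuracy(records, strict: bool = False) -> float:
    """Fraction of requests routed correctly.

    records: iterable of (category, primary_dept, routed_dept) tuples.
    strict=True  -> only the primary department counts.
    strict=False -> any acceptable-alternate department also counts.
    """
    correct = 0
    total = 0
    for category, primary, routed in records:
        total += 1
        allowed = {primary} if strict else ACCEPTABLE.get(category, {primary})
        if routed in allowed:
            correct += 1
    return correct / total if total else 0.0
```

Running the same routing log through both definitions is what produces the paired figures quoted above: one number under the acceptable-alternate definition, a slightly lower one under strict primary-only scoring.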
Ready for Similar Results?
Schedule a discovery call to discuss your specific challenges and learn how we can deliver measurable outcomes for your organization.
Schedule Discovery Call