“You’ve never heard of the Millennium Falcon?…It’s the ship that made the Kessel Run in less than twelve parsecs.” — Han Solo
“Rrrrrrr-ghghghghgh!” — Chewbacca
Developers notice small details that many people don’t. Maybe it’s from debugging thousands of lines of someone else’s code. Maybe it’s from trying to explain to our (non-developer) friends how a bowcaster works.
Well, our developers at Tiny noticed that our spelling technology was average at best and slow at worst. Andy Herron, our Editor Platform Technical Lead, worked with the Cloud Platform team to investigate. Our own Han Solo and Chewbacca combo.
What we found was that the editor waited until the user finished scrolling before checking whether new words had appeared (delaying the processing of words), and that it requested spelling suggestions for every word it found rather than first checking whether each word was spelled correctly (adding extra load to the server). The result was a noticeable delay before error highlights appeared when editing a large document. Andy saw this as an opportunity to improve the speed of the editor, and Cloud Services saw it as an opportunity to optimize our network usage.
Why is all of this important? More and more of our customers use our spelling technologies through mobile devices. Developers are creating applications for use in rural areas, on factory floors, and on the street between sales visits. They need speed and mobility and just can’t wait.
The first task was to see if we could send the server a list of all words at once. This would improve the scrolling experience, as misspelled words would already be highlighted. During initial development we avoided this due to performance concerns. The question we asked ourselves was, “Would the browser be fast enough to extract those words?”
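To make that question concrete, here is a minimal sketch of what client-side word extraction might look like. This is a hypothetical helper, not the actual TinyMCE implementation: it takes the editor’s text content, pulls out word-like runs, and deduplicates them so each word is sent to the server only once.

```javascript
// Hypothetical client-side word extraction (not the actual editor code).
// Splits text into unique, lowercased words for a single spelling request.
function extractWords(text) {
  // Match runs of letters, allowing internal apostrophes so "don't"
  // survives as one word; digits and punctuation are skipped.
  const matches = text.match(/[A-Za-z]+(?:'[A-Za-z]+)*/g) || [];
  // Deduplicate: the server only needs to check each distinct word once.
  return [...new Set(matches.map((w) => w.toLowerCase()))];
}

// Example: repeated words collapse to one entry each.
extractWords("The ship that made the Kessel Run");
// → ["the", "ship", "that", "made", "kessel", "run"]
```

In practice a tokenizer would also need to handle Unicode letters and editor markup, but even a simple pass like this runs in milliseconds on a large document.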
We scratched our heads. Our “a-ha” moment came when we realized that our word count feature was already performant, which, combined with improvements to the spelling engine, meant we could both extract and tag words very quickly.
We were now sending a request for all words in the document, and at the same time we switched to requesting a simple true/false flag for each word instead of full word corrections. This might seem excessive, but with the reduced server load, responses were coming back faster than with the previous solution, which queried fewer words at a time! However, while our cloud metrics showed improvements, the client side still had odd latency issues.
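The flags-only approach can be sketched like this. The request and response shapes here are hypothetical (the real spelling service API may differ): the client sends every word, the server replies with a boolean per word, and suggestions are fetched later, only for the words flagged as misspelled.

```javascript
// Hypothetical "flags only" protocol sketch -- not the actual TinyMCE
// Cloud API. The first round trip asks only "is this word correct?";
// suggestions are requested separately for misspellings.
function buildCheckRequest(words) {
  return { mode: "flags", words };
}

// Apply a { word: isCorrect } response map to find the words that need
// highlighting (and, later, a suggestions request).
function findMisspellings(words, flags) {
  return words.filter((w) => flags[w] === false);
}

const words = ["kessel", "runn", "parsec"];
const response = { kessel: true, runn: false, parsec: true }; // server reply
findMisspellings(words, response); // → ["runn"]
```

The win is that correct words, which are the vast majority in most documents, never trigger the expensive suggestion lookup at all.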
It turned out the response data was often large enough that it triggered different TCP transmission handling. A combination of chunking our requests into 1,000-word blocks and compressing the responses with gzip fixed this fairly quickly.
But how can we be even faster? How can we be the fastest? What is the fastest possible? Our innovation pipeline now includes not just the server but also client-side efficiency.
The server improvements are already live, and we are looking at rewriting our Spell Checker Pro plugin to include this functionality.
Yes, enabling gzip for responses and adding multithreading to our spelling servers will increase speed. But there is nothing like the collaboration that happens when our editor team works with our cloud team to achieve the best results for our customers.
But none of this answers the question: are platform devs the Han Solos of today, and cloud ops the modern Chewbaccas? Well, they’ve both been to Bespin. Let us know in the comments below!
Co-authored with Andy Herron, Technical Lead on our Editor Platform