How will we roll out AI tomorrow morning and over the next year? There are two answers:

1.  For training and customer support on complex systems, the operator's guide for each system can now be fed into any of several existing limited (not large) language models ("micro-LLMs") restricted to the user property's account.  Within minutes of creation, any user on the local system can ask natural-language questions focused on a single task or a combination of tasks.  This goes far beyond context-sensitive help and will greatly reduce vendor customer-service calls for those properties.  The vendor's work is reduced to merely uploading the documentation and any subsequent training materials.
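The "upload the manual, ask questions" pattern above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's implementation: the guide text, section titles, and keyword-overlap scoring are all invented stand-ins, and a real micro-LLM would generate a conversational answer from the retrieved text rather than return the section verbatim.

```python
# Minimal sketch of documentation Q&A: split an operator's guide into
# sections, then answer a natural-language question by returning the
# best-matching section. Keyword overlap stands in for the retrieval
# step a real micro-LLM service would perform. Example text is invented.

def split_sections(guide: str) -> list[str]:
    """Split a guide into sections on blank lines."""
    return [s.strip() for s in guide.split("\n\n") if s.strip()]

def score(question: str, section: str) -> int:
    """Count question words that also appear in the section."""
    return len(set(question.lower().split()) & set(section.lower().split()))

def answer(question: str, guide: str) -> str:
    """Return the guide section that best matches the question."""
    return max(split_sections(guide), key=lambda s: score(question, s))

guide = """\
Night audit: run the audit from the back-office menu after midnight.

Check-in: scan the guest ID, assign a room, and encode the key card.

Folio adjustments: a manager code is required to post a rebate."""

print(answer("how do I run the night audit", guide))
```

The vendor's only ongoing task, as described above, is keeping the uploaded `guide` text current; the question-answering side needs no per-property engineering.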

2.  For complex questions from staff or management, there are now, and will be more, vendors linking micro-LLMs to accounting systems, reservations and front-office databases, CRM, sales department contracts, email sent and received through the system, messaging from guests and within the property, and more.  The challenge for these vendors will be to build "micro-walls" so that only authorized staff can reach restricted data through the micro-LLM.  As I meet vendors working on these systems today, and there are more than a few, I have not yet run into one worrying about these micro-walls.  That will change. And at the speed vendors are using new tools from major providers such as Google and Microsoft to create their LLMs, this will mature quickly, perhaps by the time you read this.
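One plausible shape for a "micro-wall" is a permission filter applied before the micro-LLM ever sees retrieved data: records carry an allowed-roles tag, and anything the asking user's role is not cleared for is dropped from the context. The roles, records, and figures below are invented for illustration; no vendor's actual schema is implied.

```python
# Sketch of a micro-wall: filter records by the requesting user's role
# BEFORE they reach the micro-LLM, so a cleverly phrased question can
# never surface data the user was not cleared to see. All data invented.

RECORDS = [
    {"text": "Tonight's occupancy is 87%.",         "roles": {"front_desk", "manager"}},
    {"text": "Group contract rate for Acme: $142.", "roles": {"sales", "manager"}},
    {"text": "Q3 payroll total: $1.2M.",            "roles": {"manager"}},
]

def visible_records(role: str) -> list[str]:
    """Return only the records this role may pass to the micro-LLM."""
    return [r["text"] for r in RECORDS if role in r["roles"]]

print(visible_records("front_desk"))  # occupancy only
print(visible_records("manager"))     # all three records
```

The design point is that the wall sits in the retrieval layer, not in the model's prompt: filtering before generation is enforceable, while asking the model to "please not mention" payroll is not.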

It's a most exciting time for this and every industry.  To see one of these micro-LLMs in action, I've loaded all 14 of the books I've authored and all of my years of blog posts into a micro-LLM, which you can try at www.berkus.com or www.berkonomics.com.