Video: AI That Actually Works with Sitecore: Introducing the MCP Server

Anton Tishchenko presented the Sitecore MCP server at the Konabos webinar on September 10. He told the story of getting AI to work with Sitecore and showed a few use cases covering content creation, translation, and automated integration with Jira and n8n.
You can check out the presentation here.
Autogenerated transcription
00:00:02 Hey everyone, thank you for joining the webinar today. It's about AI that actually works with Sitecore: introducing the MCP server. I'm here with Anton, and in true demo fashion, Anthropic is down this morning. So part of the demo might not really work; we're obviously dependent on AI agents to get the work done, and as you can see, the Anthropic status page says it's down. It's giving errors intermittently and not letting us do
00:00:38 things. So we'll let Anton explain the concepts as much as he can, and when we get to the demo parts, they might or might not work. So, with that, take it away, Anton. Okay. In this case I will start with the presentation, and we will hope that the outage gets fixed, because the Anthropic part comes in about 30 minutes. It will probably be resolved by then, but for now I can't even get to my Anthropic API keys, and that's why
00:01:24 some parts of the demo may not be available. Still, I prepared a lot of slides and a lot of content, so even if the demo fails, I will still be able to tell you a lot of interesting things. So let's start. Today we will talk about artificial intelligence, large language models, the Model Context Protocol, Sitecore, and the Sitecore Model Context Protocol server: the AI that really works with Sitecore. By the way, the previous Konabos webinar with Marcelo finished with a
00:02:27 question about MCP servers and Cursor. Today I will partially answer that question. Let me introduce myself. My name is Anton Tishchenko, and I'm CTO and co-founder of the boutique Sitecore development company EXDST. I have been in development for 12 years, and I started as a Sitecore employee. I was recognized as a Sitecore MVP seven times in a row, starting from 2019. You have probably heard about me if you have read something about Sitecore and Astro, or about the Sitecore MCP server, which we will talk about today.
00:03:22 Everything started in April this year. I was a user of Visual Studio Code and GitHub Copilot at that time, and they introduced support for the Model Context Protocol. I tried it with databases, got that wow moment, and immediately wrote this message. At that time no one was working on Sitecore MCP support, neither Sitecore nor the community, so I decided to do it myself. However, everything actually started much earlier. I'm an early adopter of AI techniques for working with code. I used VS Code with
00:04:21 different AI tools (GitHub Copilot, Continue, Cline, Roo Code), and I mostly used them as advanced autocomplete. They were already awesome in 2024, but awesome only for generic software development. They were useless for Sitecore: they either didn't know anything about Sitecore or hallucinated a lot. So I decided to fine-tune an existing open-weight large language model. I took Qwen Coder and prepared a dataset. The dataset contained questions and answers from Sitecore Stack Exchange. The dataset also
00:05:16 included Sitecore documentation and some blog posts. I got a powerful GPU and ran the fine-tuning. The results were mediocre: the model became more Sitecore-aware, but there were problems. Local open-weight models are always worse than the large language models provided as services; Qwen Coder could not compete with Anthropic Claude or ChatGPT. Small models are not smart enough, and hosting and fine-tuning big models is unreasonably expensive and makes sense only for big companies. Another problem is the cutoff of the
00:06:18 training data. You need to retrain the model when new data appears, and you need to retrain it again when a new base model comes out. Knowing all these problems and the level of effort, I stopped that experiment, but you can still find the datasets and models on Hugging Face. So what about the large language models provided as services? They were very good at generic software development in 2024, much better than open-weight models. But what about Sitecore? The same story. They were useless for
00:07:15 Sitecore: very proactive, but they tended to hallucinate a lot. As I already had the dataset from my Qwen Coder training, I used it to create a custom GPT. I provided prompts forcing less hallucination and pushing it to use information from the dataset; it's some kind of retrieval-augmented generation. The large language model became much better at Sitecore tasks. But even when the large language model was good in some situations, it felt wrong. You wanted an assistant, and you got an AI assistant
00:08:13 but you feel like the AI assistant's slave. It doesn't do anything itself; it tells you what to do. That felt wrong, and it was obvious that the next stage for large language models would be tools. But at that time it wasn't so easy with tools. If you wanted to integrate Sitecore, you could, but you had to write the integration. If you needed to integrate Jira, you could, but you had to write that integration too. And so on for design (Figma),
00:09:01 for messaging (Slack), for browsers (Chrome, Firefox), and for databases. By the way, that is the reason why the LLMs didn't have hands on the first few slides. Still, it was totally possible to integrate external systems as tools even before the Model Context Protocol. You could add Sitecore tools to your agents, although I haven't seen any examples of how to do it, and you could even combine them with other tools like Jira. But once you switched to another language model, say Anthropic Claude,
00:10:02 you needed to repeat it all: write a separate integration for Sitecore and an additional integration for Jira. You probably already see where I'm going. We have an M x N problem: we have M large language models and N tools, and in order to integrate everything with everything we need to write M x N integrations. That's a huge amount of work, and it is no surprise that tools didn't become popular earlier; no one wanted to do that amount of work. The Model Context Protocol
00:11:00 solves this issue. It standardizes the way large language models work with the external world. Now you write an integration just once, and it works, in a sense, the same way with all large language models. Now large language models' hands are standardized. The Model Context Protocol describes resources, tools, and prompts. Resources are something static: something that doesn't require any computation and doesn't change application state.
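The M x N arithmetic above can be sketched in a few lines of Python. This is purely illustrative; the counts of five models and twenty tools are made-up numbers for the example:

```python
# Point-to-point world: every model needs its own adapter for every tool.
def integrations_without_mcp(models: int, tools: int) -> int:
    return models * tools

# Shared-protocol world: each model and each tool implements MCP once.
def integrations_with_mcp(models: int, tools: int) -> int:
    return models + tools

m, n = 5, 20  # e.g. 5 large language models and 20 external systems
print(integrations_without_mcp(m, n))  # 100 bespoke integrations to write
print(integrations_with_mcp(m, n))     # 25 protocol implementations in total
```

The point is the growth rate: adding one more tool costs M new integrations without a shared protocol, but only one with it.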
00:11:58 Prompts are advanced queries: when you don't want to type a long message telling the large language model what to do, you use a prompt. And tools are something that either requires computation or changes application state. So now we can write one integration for our system and use it with all large language models. However, that is the standard; reality differs. In reality, the major part of Model Context Protocol clients doesn't support prompts and resources. That's why we decided to make
00:12:55 everything a tool. Even things that should be resources, documentation for example, are tools in our case, because this way our Model Context Protocol server is more universal and works with the major part of clients. So everything started in April this year, and a lot has been done since then. There are more than 100 tools, 146 actually. We cover the whole Item Service API, the GraphQL API, and all PowerShell commands, and we have two documentation tools, for Sitecore PowerShell and Sitecore CLI.
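For orientation, wiring a server like this into an MCP-capable client usually comes down to one entry in the client's MCP configuration file. The fragment below is only a hedged sketch: the `mcp-sitecore-server` package name, the configuration keys, and the environment variable names are assumptions for illustration, so check the project's README on GitHub for the real ones:

```json
{
  "mcpServers": {
    "sitecore": {
      "command": "npx",
      "args": ["-y", "mcp-sitecore-server"],
      "env": {
        "SITECORE_HOST": "https://your-sitecore-instance.example",
        "SITECORE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```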
00:13:58 Everything is built using GitHub Actions and delivered as an npm package and as Linux and Windows Docker containers. The Sitecore MCP server supports XM, XP, and XM Cloud. All you need is to enable the APIs to allow the Sitecore MCP server to access your Sitecore, so it should work with pretty much all Sitecore versions. And by the way, implementing the Sitecore MCP server was the best way to learn all the Sitecore PowerShell commands. I literally tried every Sitecore PowerShell command, and how many devs have done that
00:15:02 before? So, taking the chance, I want to say thanks for Sitecore PowerShell Extensions to all the people who worked on it, especially Adam and Michael. 100+ tools is a big number, but what's included? There are a lot of tools to get items: by ID, by path, by query, by GraphQL query, by search. You can create, update, and delete items. You can perform advanced operations on an item: you can publish it, assign a workflow, run a workflow action, and find referrers and
00:16:01 references. You can assign a template, or modify a template by adding, changing, or removing base templates. You can create and update language and numeric versions. You can work with presentation: add a rendering to a page, change a rendering's data source, change rendering parameters, and create, read, update, and delete layouts, renderings, and placeholders. Large language models also get advanced tools to work with security: they can create, read, update, and delete domains, roles, and users, and they can change
00:16:55 item access rules to configure who can read and who can write items. For troubleshooting, you get access to the Sitecore logs, so the large language model can read the logs, see that something is wrong, and suggest a fix. And there are a few tools for documentation: one for Sitecore CLI and one for Sitecore PowerShell. For Sitecore CLI, we decided to make it documentation, because clients are currently quite good with the terminal and can call Sitecore CLI without tools;
00:17:47 they just need guidance, rules for how they should call the CLI. And for PowerShell, sometimes when you need bulk processing, it can be more efficient to run a script and pass it through the Sitecore MCP server to get the result, rather than calling multiple Sitecore MCP tools. That's why we also have a tool for Sitecore PowerShell documentation. So there is a lot you can use. And who can use it? The first group, of course, is developers, because AI adoption among developers is higher
00:18:42 than in other groups. But there are other groups that can use it: for testing, for translation, for content creation. Let's check a few use cases. The most obvious one is translation. I think the major part of translation in the world is already done by AI. But you may ask: why do you need the Model Context Protocol for it? There are a few reasons. The first reason is that you don't need to write a single line of code. You just add a large language model, configure it, and equip it with the Sitecore MCP
00:19:36 server. It's smart enough to find all the data sources for your page and translate them. Another advantage of using the Model Context Protocol is the freedom to choose your large language model; in some sense it's composability, remember this buzzword that was very popular a few years ago. With MCP you can choose the large language model that works best for you: Anthropic Claude Opus or Claude Sonnet, OpenAI GPT-4 or GPT-5, or Google Gemini. And if you're concerned about
00:20:30 privacy, you can use self-hosted models like Qwen Coder or a gpt-oss variant. And you are not stuck with your choice: a new model appears, and you can switch to the new model on the same day. Now you choose the way to translate, not your translation service provider. And if you are not happy with the large language model's translation quality, you can add another MCP server that provides translation services, for example the DeepL MCP server. And all of this you get with no extra cost: you pay only
00:21:25 for the large language models, with no additional fees to any service providers. Another area where I expect Sitecore MCP adoption is software development. The first example is that you can easily scaffold components. There is a great article from Jeroen, which will be in the links at the end of the presentation, where he wrote just one prompt. With that one prompt he was able to create the rendering itself, the rendering item, the data source template item, the rendering parameters item,
00:22:17 and then assign the rendering to a page and prepare test content for it. And you can even improve on this: you can add two Model Context Protocol servers, the Sitecore Model Context Protocol server and the Figma Model Context Protocol server, and specify which frame in Figma should be used as the design. In that case you will not get some abstract design that merely follows the specification; you will get your design. And you can repeat it for all your components, and that's how you can scaffold a whole
00:23:06 website in a few days. That's incredible speed for Sitecore websites, and this technique allows fast prototyping. You probably wanted to try something before but didn't dare, because it could take a lot of time; now you can easily try it without burning the whole project budget. Another way to use the Sitecore MCP server is rubber-duck debugging, but now your rubber duck isn't silent. Now it can check logs, it can check items, it can check code, and maybe
00:24:01 provide you with fixes or some ideas. And another way to use the Sitecore MCP server is everything around content: authors can create content, search for content, and modify existing content. There is another great article, from Yon Brewer, about content migration. He used two Model Context Protocol servers, one for Sitecore and a second for Umbraco, and he was able to migrate content from Sitecore to Umbraco and from Umbraco to Sitecore while writing zero lines of code. And it doesn't have to be Umbraco; it
00:24:57 can be any external data source. Another case: if you are a quality assurance engineer, you can easily create a lot of test content to test different cases, different renderings, different languages, different amounts of content on the page. And those are just a few examples of usage. You are limited only by your imagination and your needs. But the Model Context Protocol isn't a magic wand that solves everything, so let's talk about a few cases where it doesn't work well. The first example is Sitecore GraphQL.
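A quick aside that helps frame this example: whether a payload fits a model's context window can be estimated with the common rule of thumb of roughly four characters per token. The sketch below uses that approximation (real tokenizers differ, and the schema text here is fabricated):

```python
def estimated_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English-like text.
    return len(text) // 4

def fits_context(text: str, context_window: int = 200_000) -> bool:
    return estimated_tokens(text) <= context_window

# A deliberately oversized fake schema: 36 characters repeated 30,000 times.
schema = "type Item { id: ID! name: String! }\n" * 30_000
print(estimated_tokens(schema))  # 270000 estimated tokens
print(fits_context(schema))      # False: it overflows a 200k-token window
```

The heuristic is crude, but it is good enough to predict when a generated GraphQL schema will blow past a model's context limit.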
00:26:00 If your website uses Sitecore SXA, and most probably it does, because it has been the recommended way to build Sitecore websites for the last few years, you will get a huge GraphQL schema. The size of the GraphQL schema will be bigger than 200k tokens, and it doesn't fit into the context window of many large language models. I haven't found an easy way to split the GraphQL schema into parts, your templates versus system templates. It's possible to do, but it requires modifying Sitecore, and I didn't
00:26:51 want to modify Sitecore, because I wanted the Model Context Protocol server to stay Sitecore-agnostic and work with any site. So if anyone from Sitecore is watching this webinar, register this as a feature request. It would be nice to have both the full schema, as it is now, and a partial schema related only to your project templates. It would open additional possibilities to make Sitecore even more AI-ready. In the meantime, you can use other large language models that have a larger context window, for example
00:27:47 Google Gemini, which has a context window of 1 million tokens, but that could still be inefficient and relatively expensive. Another problem was introduced by ourselves: I decided to write too many tools. The idea was to cover the whole Sitecore Item Service API, the GraphQL API, and all Sitecore PowerShell commands, and we ended up with too many tools. For example, a large language model can get an item via GraphQL, the Item Service, or PowerShell, by ID, by search criteria, by query, by path, and it starts to use
00:28:40 all the tools at once. I call it AI procrastination. So be aware of this problem. In a second phase we plan to split the Sitecore MCP server into a full variant and a basic variant, but it's not a blocker for you, because you can configure which tools to use. I think all Model Context Protocol clients support it; at least I haven't come across a client that doesn't allow it. So if you work with content, leave just the Item Service API tools. If you don't use Google Gemini, disable GraphQL. If you work with
00:29:30 presentation, enable the presentation tools. If you work with security, enable the security tools. If you are working on bug fixing, enable the log tools. Use only what you need and you will get really great results. Another challenge is Sitecore's complexity. Sitecore is complex. For example, let's take presentation. You can configure presentation using page designs and partial designs, you can configure it on standard values, and you can configure it on branch templates. Part of the renderings can be
00:30:25 on the final layout and part on the shared layout. It's not easy. In order to get more from Sitecore and AI right now, do not overcomplicate your website. If you want AI to be efficient, make your website as simple as possible. Or you can wait: I'm an AI optimist, and I think that we will eventually get there and large language models will be good even at very complex tasks. So how can you run it? You have multiple options. If you want to run it locally, the best way
00:31:28 is to use the npm package. If you host your Sitecore in containerized environments, you have Windows and Linux images to start your container. And if you want to change or tune something for your needs, everything is available as source code on GitHub. You can fork it and change it for yourself, and if you think you did something valuable, I will be glad to receive pull requests. Now, what client should you use with the Sitecore MCP server? The list starts, of course, with developers: if you are a
00:32:33 developer, you have probably already tried Cursor, or Visual Studio Code with GitHub Copilot, or at least you have heard about them. If you are an advanced vibe coder, you can use Claude Code, and if you are not a developer, you can also use Claude Code. So my personal recommendations: use Anthropic models as your large language model, Claude Opus or Claude Sonnet at this time, and use Cursor if you are a developer and Claude Code if you are not. But remember that these recommendations are only current as of
00:33:28 September this year. Everything changes too quickly, and next month there will probably be something better. My favorite usage of the Model Context Protocol is not inside the integrated development environment: it's the integration of MCP into an agent, and one of those use cases I want to show you is the n8n integration. We will try it; hopefully Anthropic will be up, and if it's not, I will just describe how it works. So, it's time for the demo. Let me check the Anthropic status.
00:34:34 Bad news. Bad news. But let's move on. While Anthropic is still down, let me start with Cursor. As I already said, the easiest way to start with the Sitecore MCP server is to use Cursor. To begin, I recommend using our GitHub repository. It's basically a fork of the official Sitecore demo; the only difference is that it contains a demo site built on Next.js and on Astro, because, you know, I'm a fan of Astro,
00:35:45 and there is a branch where everything is configured for you to start with the Model Context Protocol. You need to fork it, run init.ps1, and then run up.ps1, which I actually did before this demo. So I started Sitecore locally from this repository, and this repository already has a sample for the Model Context Protocol. Here you can see that we have configured the MCP server with full access to GraphQL, the Item Service, and PowerShell, and we have a .cursor folder with
00:36:45 the configuration for this MCP server. It points to the MCP server that was started in Docker. I started Sitecore before this webinar, and you can see that the MCP server is up and running. Now, if I go to Cursor settings and the MCP integration, I can see the Sitecore MCP tools. Here I can enable or disable them, and I can select the tools I want to work with. For this demo, I selected just the basic tools related to the Item Service. Let's
00:37:44 try them in action. Let me start a new chat and ask our large language model something, for example: what Sitecore sites are available? It may try to use the source code; if it does, we will stop it and say: use the tools. Yes, let's stop it and ask it to please use the tools for this question. And you can see that it started to call the Sitecore MCP tools. It gets an item by path, then it gets another item, then the item's children
00:38:47 and then the children's children. From time to time it calls tools that aren't really required, but eventually it gets there. So what is the result? The result is that we have three basic financial-services websites, with some details, languages, and key features. Let's go to Sitecore and check. Okay, here I have Sitecore; let me start the Content Editor. Yep, we have three websites: Basic, Financial, and Services. Let me open the homepage of the Services website. One runs on Astro, the second on
00:39:48 Next.js; they are the same. And let's say I want to change this text on this page. It's saved somewhere in the items, but I don't want to look for the item. I can just go to my chat with the large language model and say: please change the text on the homepage of the Services site; let it be not 'dream project', let it be 'your next project'. Let's check: it started to get items. Let's see whether it will be able to find the item. As you can see, I
00:41:01 haven't specified a path, I haven't specified the data sources; I just specified the page that I want and the text that I want. The site is configured somehow, but for this case I don't care how. While it's thinking, let's check the Anthropic status again. There's another message: API, Claude AI, and Console services impacted. The bad thing for us is the API. Let's hope for the
00:42:04 best. And what is our Cursor doing? It's getting items; it still hasn't found the right one. By the way, most probably Anthropic is used here as well, and that could also be a reason for some degradation. Here you have the ability to switch the agent from 'auto' to the model that you want, and it probably switched to some worse model, which is why it took so long. It should take just a few prompts, but instead
00:43:15 it takes a lot of time. Okay, at least, what does it write? It writes about the hero banner, and it started to get children again. That's the thing: with a large language model the logic isn't deterministic. So let's just try again in a new chat. Something went wrong, so let's stop this one and run it again. It
00:44:47 started to use the CLI, so let me say: please use the Sitecore tools, because we don't want it to use the CLI. And now, finally, it should do it. It ran edit-item, so let's check. You can see that here we have the text 'Let's build your next project', and you can see that the text was changed. The demo wasn't ideal, but at least it finally changed. Let's try the Anthropic demo. It
00:46:02 still has problems, but at least the status is orange, not red. So, what is the interface where everyone is working? It's Jira. That's why we decided to make AI agents able to work in Jira. In this case, let me show you XM Cloud. I will use an XM Cloud environment, which has basically the same three websites (Basic, Financial, and Services), and I have a website running on Vercel where we will be able to see our
00:47:15 changes. Let's find the Financial website and choose a page. 'Retirement Planning' would be too scary to let AI work on, so let's select something else, for example 'Personal' and 'Borrowing', and let's translate this page from English to Spanish. As you can see, I have versions for English, French, and Japanese, but not for Spanish. Let me copy the item path and create the ticket. Let me name it 'Borrowing Spanish'. And
00:48:18 I need to add a description: translate the Borrowing page on the financial website from English to Spanish. Let's also specify what kind of Spanish, because there is Mexican Spanish and there are other variants. And let's specify: first, translate the page itself, with the path to the page, and second, translate the page's data sources. If we don't specify the path to the page, most probably it will still work, but sometimes it finds the page and sometimes it doesn't. So it's better to
00:49:32 write your ticket in the proper way, with the path to the item. But we will not specify all the data sources; we will leave it to the large language model to iterate through them and update all of them. Let me save this task, and now let me assign it to 'AI editor'. And who is this mysterious AI editor? It's our agent running in the background, powered by the n8n automation tool. Actually, the AI agent doesn't necessarily have to be in n8n, but just look at how it looks; it
00:50:36 is great for a demo: you can visualize your workflow and show it. Let's cross our fingers; the API still doesn't work, so we will start this workflow, but it will most probably fail because of the external services. So I started the workflow, and I will go through the nodes one by one. At the beginning, this workflow is executed every five minutes. We take all tickets that are assigned to AI editor and that are in the AI column, which I actually did: I
00:51:34 created a task for AI editor. Then we loop over the items, and before taking an item into work we double-check that it is still assigned to AI editor and still in the AI column, because if you have many items, you may not get to an issue within five minutes or so. Then we move the issue to the 'AI block' state. Let's check our board; we should see the 'Borrowing Spanish' task in the 'AI block' state. For us that means the work on this ticket is in
00:52:30 progress. And now there is the interesting part, and hopefully for us it's working. I hope it will finish, because the Anthropic services are down, but let me describe how it works. We have an AI agent with a prompt, and there is actually nothing specific in this prompt. It just says that you are an AI agent running in the background, so you can't ask to clarify every step: please do everything at once, and ask all
00:53:13 your clarifying questions at once as well; and once the ticket is done, please move it to the 'AI QA' state and write a comment on it. The AI agent also gets the issue ID, issue name, and description. Now, what else is present here? It has a large language model; in our case it's an Anthropic Claude model, Claude Sonnet 3.7. It's not the most advanced one (the most advanced are Opus 4 and Sonnet 4), but I wanted to show that it can work even with models that aren't top of the line. Here you can specify ChatGPT, or you can specify a local model if
00:54:16 you have one. Another node is Simple Memory; in our case it's just memory for the ticket, but it can have useful applications. For example, your AI agent can remember tickets it worked on a month ago, a week ago, or a day ago, and this simple memory can even be shared between agents. And we have two Model Context Protocol servers. One is for Sitecore, with a set of tools enabled for this server; here we selected the tools that could be useful for a content editor. If we had an AI agent that is a
00:55:21 developer, we would select different tools. The other Model Context Protocol server is the Atlassian server. It allows us to move an issue from one column to another or add comments to issues. It's still working, but we can see that it has already made some requests to the Sitecore tools. So we can check the Borrowing item and see whether something was translated. Yes, we can see that there is now a Spanish version, and I think there will even be Spanish versions
00:56:18 of some data sources. And we have them. So let me copy the path for Borrowing and switch to the page to see the Spanish content. Yes, we can see that the page is in Spanish, and that's how it should work. We can also see that it executed the Atlassian tool just once, so it probably either moved the ticket or wrote a comment. As the ticket is still here, there should be a comment. Yes, we have a comment: it wrote, 42 seconds ago, that the Borrowing page and all data sources were
00:57:38 translated from English to Spanish, with the list of content that was translated. The markdown isn't ideal, but it's still much better than what the major part of humans write. Oh, it called this tool a second time. So let me refresh this page. And now you can see that it moved this ticket to the 'AI QA' state, and now it's time for our other AI agent, our QA engineer. It's basically the same. Let me execute the workflow. Everything works the same; we just have an
00:58:40 even cheaper model, Sonnet 3.5, and even fewer Sitecore tools. For this case, we have only the tools to read content, because we know that QA will be working with what the authors produced. And as you can see, with the cheaper model (which apparently isn't affected by the outage), it finished quickly. Now we can check our board: our ticket was moved to the 'AI done' state, and our AI QA wrote a comment that everything was translated. And now it's time for the human
00:59:32 in the loop. Now it's time to assign this ticket to a human, who either checks everything on the page, whether it was translated, or, if you trust your new team members, moves it to Done. And if the task failed for some reason, AI QA will move it to the 'AI blocked' column. Here I prepared a sample where I intentionally broke the translation, and AI QA was able to find the problem. That's how it should work. And it doesn't have to be a translation task.
01:00:29 It can be any task. For example, you can create a page about blockchain or any other topic. You can write more content for a page, or rewrite content; it can be anything related to content. But it's not limited to content only: you can equip your AI agents with additional model context protocol servers and allow them to develop some code. Why not? So let me move back to the slides, and to the conclusions. If you had told me two years ago that generative AI large language models would be
01:01:27 able to add renderings to a page, I would not have believed you. But here we are. AI is already here, and you probably guessed that it even helped me prepare some parts of this presentation. Model context protocol made a breakthrough in AI. Now large language models are not only machines you can talk to; these machines are capable of performing actions, and they can perform actions with Sitecore if you give them access to the Sitecore MCP server. I still, from time to time, get this wow
01:02:20 moment when I see what it is capable of. So give it a try. It's very easy, especially if you are a GitHub Copilot with Visual Studio user, or a Cursor user. Find something that you can optimize and delegate to an AI agent. And feel free to contact me about AI, about Sitecore, about model context protocol, and about AI automation in general, not necessarily only for Sitecore. Thanks to everyone who helped me with the Sitecore MCP server, to everyone who tried it, who left
01:03:18 feedback, who wrote articles, who wrote some code, and special thanks to my colleagues Vadym and Stas. And the final slide: you can scan this artistic QR code; hopefully it is scannable on any device. And now I'm ready for questions. It took longer because we had the Anthropic outage, and I'm not sure if we still have time for questions. We do. We do. Thank you, Anton, for going through all of this. It was interesting, and thankfully it
01:04:07 worked. I really like the Jira integration part of it. So, this is part of the question I wanted to ask, but I'll ask anyway. I know you've done quite a bit of work. What do you think is missing, and what is the next set of features? And I'll add one more question on top of that: do you know if Sitecore is working on their own, official Sitecore MCP server? I don't know exactly. I just heard some
01:04:44 rumors from different people that they are working on an official Sitecore MCP server. What it will be and when it will come, I have no idea. And you don't necessarily have to wait for it; you can use my MCP server. I see that the slide was cut off a little, so let me move my browser up a bit to make sure the QR code is scannable. About my plans for the Sitecore MCP server: we have a lot of tools, and we need
01:05:42 to try all of them. Some tools will be removed. For example, we have a tool for the ItemService stored query endpoint, and it's hard even for a developer to grasp that you need to create a stored query somewhere first and only then execute it; it's just impossible to explain that to large language models. That's why these kinds of tools will be removed. Also, we will move the most useful tools into a basic package, so that you don't need to select the tools you want
01:06:29 to use. For example, in Claude Desktop you start the Sitecore MCP server and you need to select which tools you want to use, and that's not a cool thing, because you have to click a hundred times for tools that you don't want. There will be a basic version that you can just use, and that's it. Another plan is to experiment with GraphQL. There is big potential in GraphQL, and probably I will be able to find a way to split the schema by myself, without Sitecore's help.
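For context, registering an MCP server in Claude Desktop is a one-time edit to `claude_desktop_config.json`. The command, package name, and environment variable names below are illustrative placeholders, not the actual Sitecore MCP server distribution:

```json
{
  "mcpServers": {
    "sitecore": {
      "command": "npx",
      "args": ["-y", "sitecore-mcp-server"],
      "env": {
        "SITECORE_HOST": "https://my-instance.example",
        "SITECORE_API_KEY": "<item-service-api-key>"
      }
    }
  }
}
```

Once registered, the per-tool selection the speaker describes happens in the Claude Desktop UI, which is exactly the clicking that a trimmed-down "basic" tool package would avoid.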
01:07:18 Does that answer the question? Yes. Yes, it does. So the next question is: is there a way to know the items affected by the prompt? How can you be sure it won't affect more items, or the wrong item, by mistake? So, for the case that I showed on the XM Cloud portal, you will be able to see that the item was updated. You can control it: you can create users with limited rights and assign roles to them, so they have only the access they should have. With n8n you have access to
01:08:19 everything that was executed. For example, if we look at the executions, and at the last one, we can check all executions and see that the tools were called 14 times, and we can see all the data that was sent to the Sitecore model context protocol server and everything that was successfully changed. A similar thing applies to Cursor: when I run it, you can see that it called getting the item by path, then it called getting
01:09:15 that, then it called editing the item, and it updated the item with this data. In my Cursor configuration everything is allowed, because that's my local sandbox and I allow everything, but in the default Cursor configuration it will ask you about each tool, each time. So even if it just wants to get items, you can allow or disallow it. For example, you can allow Cursor to run all read tools without confirmation, and configure it to ask for confirmation before running edit tools. And
01:10:15 that's how you can make sure that only the required items were affected. Also, at this stage you should start with local, dev, QA, or UAT instances, not with production. Once you are confident in your new AI colleagues, you can start to use them in production. That's the answer. Makes sense. And one question from me, out of curiosity, because I'm actually struggling with this. For the XM Cloud part, Anton, how do you, or how
01:11:07 does the MCP server handle the management API key for the API calls, for example for publishing via the management API? Does it handle all of that by itself? So, if you are talking about GraphQL management keys, we don't use them. Okay. That's actually a great idea; we can probably add it. Initially we concentrated on tools that work on both XM/XP and XM Cloud. That's why there is only the ItemService API and GraphQL, but not the GraphQL management
01:11:55 API; just the GraphQL Edge schema and PowerShell commands. All these APIs are configured just once, and you do not need to manage keys after that. But that's a great idea; I will think about adding it in some future version. Awesome. Yeah, I have a need for an external service in Azure to be able to publish an item using the management API key for XM Cloud. But the thing is, the key needs to be generated pretty much on demand each time. I was just curious. So maybe
01:12:46 I can use the MCP server with a PowerShell command to do that for now, and then at some point I will check again. But anyway, this was really, really useful. Thank you so much, and thank you for sharing the QR code. This presentation will be available on demand, on video, on both YouTube and LinkedIn shortly after this finishes. So thanks again, Anton, for all your effort putting this together. Thankfully Anthropic worked in the end, so we didn't have to face any issues.
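On the publish-on-demand question raised just above: one way to sketch it is to request a fresh client-credentials token per call and then post a publish mutation to the authoring GraphQL endpoint. This is only a sketch under assumptions: the `publishItem` mutation name, its input fields, and the token endpoint shape should all be verified against the current Sitecore XM Cloud documentation before use.

```python
import json
from urllib import request

def token_request_body(client_id: str, client_secret: str, audience: str) -> dict:
    """OAuth client-credentials body -- requesting a fresh token per call
    covers the 'key generated on demand each time' requirement."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "audience": audience,
    }

def publish_mutation(item_path: str) -> str:
    """GraphQL mutation for the (assumed) publishItem operation."""
    return (
        "mutation { publishItem(input: {"
        f' rootItemPath: "{item_path}", targetDatabases: ["experienceedge"]'
        " }) { operationId } }"
    )

def publish(endpoint: str, token: str, item_path: str) -> None:
    """POST the mutation to the authoring GraphQL endpoint."""
    req = request.Request(
        endpoint,
        data=json.dumps({"query": publish_mutation(item_path)}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    request.urlopen(req)
```

The same flow could run from an Azure Function, or, as suggested in the discussion, be replaced by a PowerShell command exposed through the MCP server.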
01:13:18 Yeah, and thank you for giving me the chance to present on your channel, on your webinar. I was happy to present it here. Unfortunately the topic wasn't selected for Symposium, but this is also a decent place for it. Yeah, we had quite a few people register, so I'm sure this will be watched again and again. Thank you so much for your time. Yeah, thank you for organizing this session. Yep. Bye.
