The last session of the day was a full house because it was all about the architecture of search in SharePoint 2013. The good news: FAST is the search engine in 2013 regardless of which version of the product you are using, so when you use search in SharePoint 2013, you are using the FAST technologies. Another interesting fact is that Exchange 2013 uses these same search components under the hood, which helps explain why we are also getting improvements in eDiscovery across these platforms. As for the session itself, the presenters focused on the following components of the architecture:
How content gets into the search system: the crawl components now have a new feature called continuous crawl, which keeps the system in a perpetual state of crawling content outside of the traditional incremental and full crawl processes. This can help LARGE repositories where full crawls were simply not possible because of the amount of time they take to complete. During the demo of this section they showed off a feature they called “Smart Author / Title,” in which the search system was able to infer the title of a document without that metadata field actually being set. This will automatically help search relevancy in environments that have issues with valid metadata.
The index core is where the bulk of the FAST technology comes into play. The old 2010 property database is no longer around; all index information is now stored on the local index node. You no longer refer to the architecture as rows and columns. For those of you who had to deal with FAST Search Server for SharePoint 2010, they are now referred to as partitions and replicas.
The best part of this section was the demo of the new Index Schema, where you work with Crawled Properties and Managed Properties to create a new refiner. The good news: after setting up your Managed Property correctly, the search refiner web part shows the available refiners in a pick list instead of making you configure it all with XML.
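Once a Managed Property is set up as refinable, you can also pull refinement data programmatically. A minimal sketch, assuming a placeholder site URL and the `FileType` refinable property (authentication omitted), showing how a refiner can be requested through the SharePoint 2013 search REST endpoint's `refiners` parameter:

```python
# Hedged sketch: request refinement data for one refinable Managed Property
# via the SharePoint 2013 search REST endpoint. The site URL and the
# "FileType" property name are placeholder assumptions, and auth is omitted.
from urllib.parse import quote

def build_refiner_query(site_url: str, query_text: str, refiner_property: str) -> str:
    """Build a search REST URL that asks for refiners on one property."""
    return (f"{site_url}/_api/search/query"
            f"?querytext='{quote(query_text)}'"
            f"&refiners='{refiner_property}'")

url = build_refiner_query("https://contoso.sharepoint.com", "sharepoint", "FileType")
print(url)
```

The response would include refinement buckets (e.g. counts per file type) that the refiner web part renders for you out of the box.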
The query component section refers to the area where information comes out of the search system. We now have the ability to use REST, OData APIs, CSOM, and SSOM as tools to pull this data and use it effectively. New web parts such as the Content Search web part, in conjunction with the result templates (or rendering templates), can be used by site administrators to solve problems like site collection boundary issues that required custom code in the past.
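To make the REST option concrete, here is a minimal sketch of preparing a query against the 2013 search service's `/_api/search/query` endpoint. The site URL and query text are placeholders, and authentication is omitted, so this shows the shape of the call rather than a deployable example:

```python
# Hedged sketch: prepare a GET against the SharePoint 2013 search REST
# endpoint, asking for JSON results. Placeholder site URL; no auth shown.
from urllib import request
from urllib.parse import quote

def make_search_request(site_url: str, query_text: str) -> request.Request:
    """Build a Request for /_api/search/query with a JSON Accept header."""
    url = f"{site_url}/_api/search/query?querytext='{quote(query_text)}'"
    return request.Request(url, headers={"Accept": "application/json;odata=verbose"})

req = make_search_request("https://contoso.sharepoint.com", "project plan")
print(req.full_url)
```

Sending the request (with proper credentials) returns the result rows that web parts like the Content Search web part consume.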
Another fun part of this section was talking about Query Rules and how powerful they can be for matching search items at query time. The demo used terms like “slide” or “deck” to match PowerPoint presentations and bubble those items to the top, because they are a known item type to work with. We will all need to further understand how we can use these rules to our advantage as 2013 search matures.
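The idea behind that demo can be illustrated in a few lines. This is a conceptual sketch only, not the SharePoint API: when a query contains a trigger term like “slide” or “deck,” known presentation items get promoted to the top of the results:

```python
# Conceptual sketch (NOT the SharePoint API): what a query rule's trigger
# terms do -- a query containing "slide" or "deck" promotes PowerPoint items.
TRIGGER_TERMS = {"slide", "deck"}

def promote_presentations(query: str, results: list[dict]) -> list[dict]:
    """Return results with PowerPoint items first when a trigger term fires."""
    if TRIGGER_TERMS & set(query.lower().split()):
        hits = [r for r in results if r["type"] == "pptx"]
        rest = [r for r in results if r["type"] != "pptx"]
        return hits + rest
    return results

results = [{"name": "notes.docx", "type": "docx"},
           {"name": "kickoff.pptx", "type": "pptx"}]
print(promote_presentations("q3 deck", results)[0]["name"])  # kickoff.pptx
```

In the real product you define the trigger conditions and promoted results through the Query Rules UI rather than code, but the matching behavior is the same.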
The last section around search covered the analytics that now live within the search infrastructure. This allows the service to highlight content that people are finding through search, and it also addresses the performance issues that the Web Analytics Service Application had back in 2010. The best news about this feature: the more the system is used, the more the analytics service learns, which ends up serving the farm and your end users by surfacing more relevant results. With this functionality we can now provide recommendations to users like “people who viewed this page also viewed this other one.”
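The recommendation idea boils down to counting co-views. This is a toy sketch of the principle (the real 2013 analytics pipeline does this inside the search service at scale): tally which pages appear together in the same session, then suggest the most frequent companions of a given page:

```python
# Toy sketch of "people who viewed this also viewed": count page pairs that
# co-occur in a session, then rank companions of a page by that count.
# Session data here is made up; the real service gathers usage events itself.
from collections import Counter
from itertools import combinations

def co_view_counts(sessions: list[list[str]]) -> Counter:
    """Tally unordered page pairs that appear together in a session."""
    counts: Counter = Counter()
    for pages in sessions:
        for a, b in combinations(sorted(set(pages)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(page: str, sessions: list[list[str]]) -> list[str]:
    """Pages most often viewed alongside `page`, best first."""
    related: Counter = Counter()
    for (a, b), n in co_view_counts(sessions).items():
        if a == page:
            related[b] += n
        elif b == page:
            related[a] += n
    return [p for p, _ in related.most_common()]

sessions = [["home", "pricing"], ["home", "pricing", "docs"], ["docs", "pricing"]]
print(recommend("home", sessions))  # ['pricing', 'docs']
```

Because the counts keep accumulating as people use the system, the recommendations improve over time, which is exactly the “the more it is used, the more it learns” behavior described above.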
After looking through this architecture and the discussions across the sessions, it is important to understand that search is everywhere in 2013, so even if you think you may not be using it… you probably are.