Sunday, August 31, 2014

SharePoint 2010 Diagnostic and Usage Analysis Logs best practice

SharePoint 2010 has improved the way ULS logs information: diagnostic logging data can now be written to trace files, the Windows Event Viewer, and a new SharePoint logging database (new in SharePoint 2010). By default, trace log files are created in a LOGS folder under the root folder where SharePoint is installed, also known as the 14 Hive.

SharePoint diagnostic logging is very important, and extremely helpful when we encounter problems with our SharePoint environment. However, diagnostic logging can be ineffective at times, and can even slow SharePoint performance down if it is not managed properly. The one thing you should ABSOLUTELY do is move the ULS logs off of the system drive. ULS is designed to stop logging if it perceives a disk space issue; moving the logs off of the system drive ensures that logging isn't going to fill up the system drive, and that ULS logging isn't going to contend with your page file for disk IO. Note that in order to change the location of the log file, the path must exist on ALL SharePoint servers, and the folder's permissions must be set to allow the SharePoint service to write to it.
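Before changing the log location, the target folder has to exist on every server with write access for SharePoint. A minimal sketch of that prep step, run on EACH server; the path is an example, and the WSS_WPG / WSS_ADMIN_WPG local groups are the ones SharePoint setup normally creates (verify them in your farm):

```powershell
# Sketch: create the new log folder and grant the SharePoint local groups access.
# E:\SP_LOGS is an illustrative path; adjust to your environment.
$logPath = "E:\SP_LOGS"

if (-not (Test-Path $logPath)) {
    New-Item -Path $logPath -ItemType Directory | Out-Null
}

# (OI)(CI) = inherit to files and subfolders; M = modify, F = full control
icacls $logPath /grant "WSS_WPG:(OI)(CI)M" "WSS_ADMIN_WPG:(OI)(CI)F"
```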

There are two sets of logs you want to move in the SharePoint 2010 farm environment, the diagnostic logs and the usage logs.

Diagnostic logs: With Central Admin:

Central Admin > Monitoring > Configure Diagnostic Logging (/_admin/metrics.aspx). The setting is the "Trace Log" path at the bottom. It is recommended to change only the drive letter and leave the rest of the path alone; it will make it easier for you to find things later on.

With PowerShell: You can also use PowerShell to change this. The cmdlet is Set-SPDiagnosticConfig and the parameter is -LogLocation.

Set-SPDiagnosticConfig -LogLocation "E:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS"

Usage logs: With Central Admin:

Central Admin > Monitoring > Configure web analytics and health data collection (/_admin/LogUsage.aspx). The setting is the "Log file location" setting. Set it to the same path you used for the Trace Log above.

With PowerShell:

The PowerShell cmdlet to alter this is Set-SPUsageService and the parameter is -UsageLogLocation.

Set-SPUsageService -UsageLogLocation "E:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\"
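After making both changes, it is worth confirming they took effect. A quick hedged check from the SharePoint Management Shell (property names per the SharePoint 2010 object model):

```powershell
# Verify the new trace log location
(Get-SPDiagnosticConfig).LogLocation

# Verify the new usage log location
(Get-SPUsageService).UsageLogDir
```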

ITIL for SharePoint: Defining SharePoint as a Service Using ITIL Service Strategy

Summary: SharePoint, by its nature, is a technology: a bundled set of capabilities. It is up to SharePoint owners to determine how to define it and recast it as a service. SharePoint as a service is an approach that recasts SharePoint as a catalog of business-focused services. Treating SharePoint as a service is better (i.e., yields more value) than, and fundamentally different from, simply providing a set of capabilities. This guidance presents an approach to treating SharePoint as an enterprise solution by applying a service management methodology (i.e., Information Technology Infrastructure Library [ITIL]).

What is msvcs.exe?

This is an undesirable program. This file has been identified as a program that is undesirable to have running on your computer. This consists of programs that are misleading, harmful, or undesirable. If the description states that it is a piece of malware, you should immediately run an antivirus and antispyware program. If that does not help, feel free to ask for assistance on the forums.

Name: MsvcService
Filename: msvcs.exe
Command: msvcs.exe
Description: Added by the W32/Rbot-RK worm. This infection connects to an IRC server where it waits for remote commands.
File Location: %System%
Startup Type: This startup entry is started automatically from a Run, RunOnce, RunServices, or RunServicesOnce entry in the registry.
HijackThis Category: O4 Entry
Note: %System% is a variable that refers to the Windows System folder. By default this is C:\Windows\System for Windows 95/98/ME, C:\Winnt\System32 for Windows NT/2000, or C:\Windows\System32 for Windows XP/Vista/7.
Removal Instructions: How to remove a Trojan, Virus, Worm, or other Malware

What is the OWSTimer.exe process?

Name: SharePoint Timer Service
Filename: OWSTIMER.EXE
Command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\50\bin\OWSTIMER.EXE
Description: The SharePoint Timer Service is a Windows service that is installed with Windows SharePoint Services. This program is used to handle scheduled jobs related to Windows SharePoint.
File Location: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\50\bin\OWSTIMER.EXE
Startup Type: This startup entry is installed as a Windows service.
Service Name: SPTimer
Service Display Name: SharePoint Timer Service
HijackThis Category: O23 Entry
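You can check the timer service like any other Windows service. Note the service name above (SPTimer, under the 50 hive) is from the original Windows SharePoint Services; in SharePoint 2010 the service is registered as SPTimerV4, so use whichever matches your version:

```powershell
# Check the SharePoint Timer Service status (SPTimerV4 on SharePoint 2010)
Get-Service -Name SPTimerV4 | Select-Object Status, Name, DisplayName

# Restart it if timer jobs appear stuck (requires admin rights)
Restart-Service -Name SPTimerV4
```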

Tuesday, August 19, 2014

Drive Space Monitoring using PowerShell Script

#Purpose: Check the drive space on the server against a threshold value and trigger an alert
#PowerShell script for disk space monitoring

$computer = get-content env:computername;     # Get the server name
$percentWarning = 50;                         # Percentage warning threshold
$disks = Get-WmiObject -ComputerName $computer -Class Win32_LogicalDisk -Filter "DriveType = 3";
$mailbody = "The following drive(s) have less than $percentWarning% free space in Server: $computer`n`n";
$mailbody += "Drive`t`tTotalspace(GB)`t`tFreespace(GB)`n---------------------------------------------------------------`n";
$drivedata = "";
$emailsubject = "$computer - Low Disk Space Alert!";

# Code to send email to the SharePoint Administrator/Group
# (fill in your SMTP server and addresses below)
function SendEmail([string]$msg)
{
    $SMTPClient = new-object System.Net.Mail.SmtpClient;
    $SMTPClient.Host = "";                    # SMTP server name goes here
    $MailMessage = new-object System.Net.Mail.MailMessage;
    $MailMessage.Subject = $emailsubject;
    $MailMessage.Body = $msg;
    $MailMessage.From = "";                   # sender address goes here
    $MailMessage.To.Add("");                  # recipient address goes here
    $SMTPClient.Send($MailMessage);
}

# The following block performs the core functionality
foreach ($disk in $disks)
{
    $deviceID = $disk.DeviceID;
    [float]$size = $disk.Size;
    [float]$freespace = $disk.FreeSpace;
    $percentFree = [Math]::Round(($freespace / $size) * 100, 2);
    $sizeGB = [Math]::Round($size / 1073741824, 2);
    $freeSpaceGB = [Math]::Round($freespace / 1073741824, 2);
    if ($percentFree -lt $percentWarning)
    {
        $drivedata += "$deviceID`t`t $sizeGB`t`t`t$freeSpaceGB`n";
    }
}

# Email is sent only if any drive has free space below the threshold
if ($drivedata.length -gt 0)
{
    $msg = $mailbody + $drivedata;
    SendEmail($msg);
}
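To make the check run unattended, the script can be wired to a scheduled task. A sketch, assuming the script is saved as C:\Scripts\DriveSpaceCheck.ps1 (the path, task name, and schedule are illustrative):

```powershell
# Register a daily scheduled task that runs the monitoring script at 6:00 AM
schtasks /create /tn "DriveSpaceCheck" /sc daily /st 06:00 `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\DriveSpaceCheck.ps1"
```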

SharePoint Search and Why Does It Tap Out the CPU at 100%?

In many installations, I've found that for some content databases, SharePoint Search (not FAST - that's another animal altogether) can seem to take an exceptional amount of time to crawl. The causes can be widespread - from errors in the crawled content to broken links. For example, we had one database with about 70 GB of data. Though fully crawled weekly, the overall time to complete a full crawl was some 12 hours. Incremental crawls took 3 to 4 hours. In addition, whenever this crawl started (full or incremental), CPU would hit 100% and stay there - not only causing performance issues on the site but also, as a side effect, repeatedly tripping the SQL Mirror fail-over. Suspecting a number of issues, we did the following to resolve the problems:

First, isolate the content sources so that you can figure out which content database is tapping out when it starts. This will at least help you narrow down exactly where the problem is (i.e. separating content vs. people search, etc.).

Next, check the errors generated by the search - broken links within the index can lead to this problem. A good way to correct this is to determine WHY the links are broken and try to correct the root cause (permissions, bad URLs, deleted site, etc.). Sometimes simply dropping the indexes and rebuilding them will correct the issue - in our example, this corrected the errors but not the 100% CPU problem.

Next, if you have more than one server available, try changing the server roles to isolate the crawl - this may eliminate the problem or at least alleviate it enough to work on the real cause.

Last, check the site for Closed Web Parts - these are parts that end users have closed on pages instead of deleting them. In our case, the last item was the golden ticket. Using a quick PowerShell script, we were able to determine that in this particular site, there were over 651 closed web parts.
Since this certainly would be impossible to correct manually, the PowerShell script can be used to delete the parts found. This is not without risk: if users intended to close web parts (not sure why, but...) deleting them will cause them problems. In our particular case, the end users are not sophisticated enough to be using closed web parts, so the risk was minimal. NOTE: You should also check for "bad web parts" - in Central Administration, you can see this under the "Missing Assemblies" message in the Health Analyzer. You can also use PowerShell to test the database itself:

Test-SPContentDatabase -Name -WebApplication http://

The above will provide a list of missing web parts and other errors if detected. For the Closed Web Parts, here's a step by step to fix it:

1. Run the following PowerShell script to generate a CSV file that lists all of the closed parts.
2. If you have a development environment:
   1. Restore the database to a new application using the database attach method (the same as if converting).
   2. Set up the content database for search and verify whether the problem still occurs (not necessarily CPU, but time to crawl).
   3. Run the script to delete the closed parts; afterwards run an IISReset.
   4. Drop the Search indexes and run a full crawl as an isolated content source (crawling ONLY the single content DB).
   5. Compare times!
3. Once tested in Development, or directly in production (if you don't have a dev environment you can do that with):
   1. Make a full backup of the content database.
   2. Run the script to generate the CSV file - verify the number of items found.
   3. Log off all users if possible (not 100% needed, but a good idea).
   4. Run the script to delete the parts.
   5. Drop the Search indexes and start a full crawl on the content database - monitor the CPU/time.
   6. When complete, re-run an incremental crawl and again monitor the CPU/time.
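The Test-SPContentDatabase command above has its parameter values omitted; a hedged example of what a complete invocation looks like, where the database name and URL are hypothetical placeholders:

```powershell
# Check a content database for missing web parts, assemblies, and orphans.
# WSS_Content_Teams and http://teamsapp are illustrative names only.
Test-SPContentDatabase -Name "WSS_Content_Teams" -WebApplication "http://teamsapp"
```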
This MAY NOT fix the entire problem for you, but it will also not hurt - the potential downside is that it simply takes as long as usual. In our case, however, the results were DRAMATIC:

Before:
•Content DB Full Crawl: 12-13 hours
•Content DB Incremental Crawl: 3-4 hours
•CPU Usage: 80-100% with sustained peaks
•Closed Web Part Count: 651

After:
•Content DB Full Crawl: 1 hour 20 minutes
•Content DB Incremental Crawl: 2 minutes 40 seconds avg
•CPU Usage: 60-100% with NO sustained peaks
•Closed Web Part Count: 0

NOTE: This script was modified from another we found; sorry, our research did not record the original author.

SCRIPT TO LIST CLOSED WEB PARTS ON ALL PAGES:

# Write the header line to a new CSV file (this drops it in the current directory):
"Page URL, Closed Web Part Name" | Out-File ClosedWebParts.csv

# Get all webs from the web application (note: you MUST be Site Collection Admin when running this!):
$webs = Get-SPWebApplication "http://" | Get-SPSite -Limit All | Get-SPWeb -Limit All

# Loop through each of the web sites found
foreach ($web in $webs)
{
    # Get all pages from the site's root into the $AllPages array
    $AllPages = @($web.Files | Where-Object {$_.Name -match ".aspx"})

    # Search all folders for all pages
    foreach ($folder in $web.Folders)
    {
        # Add the pages to the $AllPages array
        $AllPages += @($folder.Files | Where-Object {$_.Name -match ".aspx"})
    }

    # Loop through all of the pages and check each:
    foreach ($Page in $AllPages)
    {
        $webPartManager = $web.GetLimitedWebPartManager($Page.ServerRelativeUrl, [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

        # Use an array to hold a list of the closed web parts:
        $closedWebParts = @()
        foreach ($webPart in $webPartManager.WebParts | Where-Object {$_.IsClosed})
        {
            $result = "$($web.Url)$($Page.ServerRelativeUrl), $($webPart.Title)"
            Write-Host "Closed Web Part(s) Found at URL: " $result
            $result | Out-File ClosedWebParts.csv -Append
            $closedWebParts += $webPart
        }
    }
}

SCRIPT TO DELETE CLOSED WEB PARTS ON ALL PAGES:

# Write the header line to a new CSV file (this drops it in the current directory):
"Page URL, Closed Web Part Name" | Out-File ClosedWebParts.csv

# Get all webs from the web application (note: you MUST be Site Collection Admin when running this!):
$webs = Get-SPWebApplication "http://" | Get-SPSite -Limit All | Get-SPWeb -Limit All

# Loop through each of the web sites found
foreach ($web in $webs)
{
    # Get all pages from the site's root into the $AllPages array
    $AllPages = @($web.Files | Where-Object {$_.Name -match ".aspx"})

    # Search all folders for all pages
    foreach ($folder in $web.Folders)
    {
        # Add the pages to the $AllPages array
        $AllPages += @($folder.Files | Where-Object {$_.Name -match ".aspx"})
    }

    # Loop through all of the pages and check each:
    foreach ($Page in $AllPages)
    {
        $webPartManager = $web.GetLimitedWebPartManager($Page.ServerRelativeUrl, [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

        # Use an array to hold a list of the closed web parts:
        $closedWebParts = @()
        foreach ($webPart in $webPartManager.WebParts | Where-Object {$_.IsClosed})
        {
            $result = "$($web.Url)$($Page.ServerRelativeUrl), $($webPart.Title)"
            Write-Host "Closed Web Part(s) Found at URL: " $result
            $result | Out-File ClosedWebParts.csv -Append
            $closedWebParts += $webPart
        }

        # Delete the closed web parts
        foreach ($webPart in $closedWebParts)
        {
            Write-Host "Deleting '$($webPart.Title)' on $($web.Url)$($Page.Url)"
            $webPartManager.DeleteWebPart($webPart)
        }
    }
}

by Sterling International Consulting Group

Understanding IIS Bindings, Websites, Virtual Directories, and lastly Application Pools

In a recent meeting, some folks on my team needed some guidance on load testing the Web application that one of my feature teams is developing. The questions on load testing subsided rather quickly, and prior to plugging my ears with my headphones for my ZuneHD, I was stepping out of the room when one said, "Can I ask you a question about connections in IIS..." This simple question led to a 20 minute conversation... which resulted in this blog post and some serious deja vu for me, as it was a flashback to my IIS days.

Bindings: Did you say "Bindings?"

So you've been tasked with development of a new Web application to be hosted on IIS (any version)? The first things on your mind are usually the design of the Website, how the application will interact with the middle tier, and usually security. This is a great start in the design process. However, jumping straight to this level of design often means that some other decisions later on will be a bit trickier. It starts with these questions:

1. Am I going to host everything in one IIS Website?
2. Will I use an "existing" Website like the Default Web Site, or create my own?
3. Will some of the site require secure authentication using SSL?

The first thing that often happens with developers posed with these questions is that they say these aren't important, but I quickly smile and say, "We'll see." The primary reason these questions are important is that Websites are accessed by every client using bindings. The end users of your Web application(s) don't know they are using bindings, because they are usually hidden behind a nice, pretty "Web address" using DNS. If you don't know how many Websites your Web application will utilize, you are going to struggle when you find yourself limited to "rules" governed by directories.
You see, Websites have something called Server Bindings, which represent the underlying address, port, and potentially a host header that your Website is accessed with. Do you think that HR staff would be happy if their Website were accessed using the same bindings as your company's intranet? I would venture to guess the answer is no.

Bindings 101: A typical binding for a Website is in the form IP:Port:HostHeader. For all versions of IIS that anyone reading this in 2010 cares about (version 6.0 and higher), the Default Web Site binding is set to *:80:* meaning that all requests to that server will land at that site.

Valid Bindings:

IP Field | Port Field | Host Header | Result
* | 80 | * | All requests to this server's IP address will access this site.
* | 81 | * | All requests to this server's IP address with :81 will access this site.
<IP address> | 80 | * | All requests to this IP address will access this site.*
* | 80 | <host name> | All requests to this URL will access this site.
* | 80 | <host name> | All requests to this URL will access this site.

* For the option where you utilize an IP address as the "unique" point of access, you will need to disable HTTP.sys's default behavior of listening on all IP addresses configured on your server. For example, if you have two IP addresses configured on the same server, the default "out of the box" behavior is to listen on port 80 on both of them, no matter what binding you set in the IIS Manager. To change this behavior, you will need to configure HTTP.sys's IPListenList (a future blog, I guess, as there is no MS documentation on the topic) to only listen on a specific address. This is done via the registry or NetSH, depending on which you are most comfortable with.

Figure 1: Default setting for IPListen (blank equals *:80:*)

In short, if you plan to utilize a Website, then know what your bindings will be and where your application will live in production. If it is a shared server, you can bet you will need a Host Header or a unique IP address, so think ahead and get 'er going.
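Both of the operations discussed above can be done from a console. A sketch, where the IP address and host header are hypothetical examples:

```powershell
# Show which addresses HTTP.sys currently listens on (an empty list means all addresses)
netsh http show iplisten

# Restrict HTTP.sys to a single address (192.168.1.10 is illustrative)
netsh http add iplisten ipaddress=192.168.1.10

# On IIS 7+, add a host-header binding to a site via the WebAdministration module
Import-Module WebAdministration
New-WebBinding -Name "Default Web Site" -IPAddress "*" -Port 80 -HostHeader "hr.contoso.com"
```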
Websites versus Application Pools

There are so many reasons that Websites & Application Pools are confused that I don't have enough time to do a post on it. I'm not going to try and solve the debate here; instead, I'm going to try to educate you on what the fundamental difference between the two is. In discussions with IT Pros & Developers, rarely will you find any of them who will "admit" they know what each is and when to utilize one or the other, and my guess is that over 70% of them don't know. Thus, I hope that readers out there who used their decision engine (nice plug, eh?) to find this will enjoy learning this topic, and together we can reduce that 70% to a much lower number...

Websites: Container of physical and virtual directories

It really is simple. A Website is nothing more than a container of physical and virtual directories that has a unique "Server Binding" for clients to access the content. The default container in IIS, for years, has been %systemdrive%\inetpub\wwwroot, unless you are doing an unattended install in IIS 6.0, which allowed you to put the files wherever you choose. Path + Server Binding = Website... It really is easy. NOTE: There is a serious omission here, completely on purpose. As you can see, Websites have nothing to do with memory, server processes, bitness, or performance. They simply are a path + binding.

When to choose a "Website"

With that understanding, you can now make an educated guess as to how to answer the question of whether you should create a new Website or use an existing one.
However, I will make sure to share it in case you missed it: "You decide whether to create a new Website based on whether you would like to have a unique binding for your Website or whether you want to use an existing one." The path isn't important in this equation, as I could create 1,000 Websites all pointing to exactly the same path and there would be absolutely no problem with doing this (of course, why in the heck you would do this is a great question). The key decision here is that any physical or virtual directory will always use the bindings of the Website, so ensure that you understand this.

When to choose directories?

If there is a Website already running and utilizing a binding that you would prefer to use, then you should select this option. This allows you to utilize the resources of the parent site, if interested, as the server (e.g. IIS) will handle any requests over the same connection(s). For example, any physical or virtual directory in the IIS path is still considered "/" to the server as it builds out the URI, because the bindings are already mapped at the site level. This means that URLs can be rewritten to go various different places within the folder hierarchy over the same connection, since the binding is the "same"... If you choose to put your Web application in its own Website, then you will have to use the HTTP 302 redirection capability (exposed via Server.Transfer or other methods) to push the request elsewhere. So, as you can see, thinking ahead of time about whether you are building a Website for your application or whether it is a child directory (physical or virtual) is an important piece of information to have locked down early, early on!

Application Pools: Container of applications

The very nature of application pools is to do the obvious: contain a single application or multiple applications.
The introduction of application pools in IIS 6.0 caused some head scratching, but in today's world, where IIS 6.0 is very engrained in enterprises and the Web, it leads to less scratching. However, again, development teams often make mistakes by not "thinking" about application pools and their impact on the new applications they are building. Hence the reason we will chat about this some more today...

First Concept... Windows Process = Application Pool, *not* Windows Process = Website
Second Concept... Process Management = Application Pool, *not* Process Management = Website

When to create a new Application?

By default, IIS 6.0 and IIS 7.0 must have a single application to run. If the root application (/) is deleted or corrupted, then IIS will fail, as in, not serve your application. Both products ship with a default application which is assigned to the DefaultAppPool. I should note this is only if no other Microsoft features have been installed and we have just the basic Web server installed. As you can see, there is also a Classic .NET AppPool, but no applications are currently bound to it. In IIS 7.0, any managed code application can choose to utilize the Integrated pipeline or to use the classic ASP.NET pipeline which is present in IIS 6.0. By default, you as a developer of a Web application can choose to simply inherit the settings of the parent Application Pool (/) and choose not to create your own. This is absolutely fine. So you might ask, what do I get from choosing this route? I'm glad you asked, because it is important to know that you get all the settings of the parent application pool, which in this case is the DefaultAppPool. These settings include the following:

Setting | Purpose
Recycling Settings | How often the App will be recycled, such as by time intervals, memory usage, etc.
Process Security | The identity that the W3WP process will run as
Pipeline Type (IIS 7.0 only) | Whether to use the integrated pipeline, the classic pipeline, or no managed code at all
Bitness | Whether the process runs as native 64-bit or uses a 32-bit process (64-bit OS only)

As you can see, you need to make some important decisions early on, or you are going to change a lot during the development process.

When to create a new Application Pool?

Well, it sounds like I'm best off creating a new application pool for all my Web applications. I would say you've been suckered and convinced that this is best without all the facts. The fact is that creating an application pool requires understanding your security strategies, such as whether you run Network Service, a Domain Service Account, etc., and that starts to complicate things very quickly. One thing that many managed code developers love to take advantage of is the caching capability of processes and managed code. Each time you create an application and bind it to its own unique application pool, you limit your ability to share cache with other .NET applications running on the same box. For example, if you use the Microsoft Enterprise Library throughout your Web applications, then you can often utilize caching to improve performance. As soon as you break these out into different process boundaries (e.g. App Pools), you no longer have that benefit. There are a number of examples like these that drive the question - do I use my own application pool, or do I use one already running? I'm happy to be posed a question via comments or email regarding this topic, see what your situation is, and make my suggestion :) Nonetheless, my guidance is to be careful in your planning when utilizing your own Application Pools, and to share resources where possible.
There are absolutely situations where one might choose to always go hard line with creating app pools for every new Web development project. I just caution you and say, "Not so fast, my friend..."

Summary

In today's post, I went for a "blast from the past" theme, just to feel the power between... well, let's leave it there. I spent several years focusing on our IIS product, and a great deal more focusing on helping customers struggle with the product given our less than stellar training on IIS. You now have some of that too, so for that I say woohoo! So today was fun... To summarize, I hope that I gave you non-IIS geeks a bit of understanding of one of the fundamental 'Getting Started with IIS' concepts. I often see many folks fail to understand when to use a Website or a virtual directory, and my aim with this post is to give your brain a bit of a quiz to help you figure out which "way" you want to go when you are developing applications based on IIS. In yesterday's meeting, all I did was give my feature team a lot to think about, and then I will let them choose what avenue they would like to go down...

Thanks, Param

Difference between Application Pool and WebSites in IIS

An application pool is a collection of websites running as a single process with a single identity. You can have multiple websites running under a single application pool, but you cannot have a single website running in multiple application pools. Splitting websites across application pools allows more rigid security between the sites, as well as preventing one website from being taken down when another one crashes. The benefit of combining multiple sites into a single application pool is either to share resources or to leave a smaller footprint on the server.

Consolidate SharePoint 2010 Application Pools

Working with a customer, we saw that they had 20 or so web application pools, each with one web application. The software boundary for SharePoint 2010 is to have no more than 10 web application pools. Because each application pool can have multiple web applications assigned to it, an easy fix is to just move the web applications to new application pools. I have seen several blogs and even forum posts that indicate you can just change the application pool in IIS. This is not sufficient, because SharePoint maintains a reference to the application pool and the managed account assigned to the application pool. If you change it in IIS without telling SharePoint, you can cause orphaned configuration data which may prevent you from being able to apply updates or upgrades. There's no UI to change the application pool in Central Administration; the only way to change this is through the object model. Luckily this is pretty simple to do using PowerShell.

$webService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$pool = $webService.ApplicationPools["SharePoint - AuthTest80"]
$app = Get-SPWebApplication http://teamsapp
$app.ApplicationPool = $pool
$app.Update()
$app.ProvisionGlobally()

The trick here is to use the ProvisionGlobally method. If you leave this out, it changes the setting in SharePoint's configuration data but does not make a change in IIS. ProvisionGlobally makes the change in IIS as well, and does this for every server in your farm... another huge benefit of doing this through the object model instead of manually updating the setting in IIS on every server in your farm!

Identify the IIS App Pools containing SharePoint Web and Service Applications

This PowerShell function identifies the IIS App Pools containing SharePoint (2010 or 2013) Web and Service Applications. IIS and SharePoint stored Application Pool Names are included which comes in handy for identifying IIS App Pools named with GUIDs containing Service Apps.
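The post describes the function without showing it, so here is a minimal sketch of what such a function could look like. It assumes it runs in the SharePoint Management Shell on a farm server; the function name and output shape are illustrative, and the assumption that a service application pool's IIS name is its Id GUID should be verified in your environment:

```powershell
# Sketch: list SharePoint-stored app pool names alongside their IIS names.
function Get-SPAppPoolNames
{
    $contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService

    # Web application pools (content); IIS name matches the SharePoint name
    foreach ($pool in $contentService.ApplicationPools)
    {
        New-Object PSObject -Property @{
            Type           = "Web Application Pool"
            SharePointName = $pool.Name
            IISAppPoolName = $pool.Name
        }
    }

    # Service application pools; these often appear in IIS named by GUID
    foreach ($pool in Get-SPServiceApplicationPool)
    {
        New-Object PSObject -Property @{
            Type           = "Service Application Pool"
            SharePointName = $pool.Name
            IISAppPoolName = $pool.Id.ToString()
        }
    }
}

Get-SPAppPoolNames | Format-Table Type, SharePointName, IISAppPoolName -AutoSize
```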

Application Pools

The <applicationPools> element contains configuration settings for all application pools running on your Internet Information Services (IIS) 7 or later server. An application pool defines a group of one or more worker processes, configured with common settings, that serve requests to one or more applications that are assigned to that application pool. Because application pools allow a set of Web applications to share one or more similarly configured worker processes, they provide a convenient way to isolate a set of Web applications from other Web applications on the server computer. Process boundaries separate each worker process; therefore, application problems in one application pool do not affect Web sites or applications in other application pools. Application pools significantly increase both the reliability and manageability of your Web infrastructure. You can choose to use the default application pool provided by IIS on install, or you can create your own application pool. You can run as many application pools on your IIS 7 and later server as you need, though this can affect server performance. Application pools can contain one or more worker processes. Each worker process represents work being done for a Web site, Web application, or Web service. You can create a Web garden by enabling multiple worker processes to run in a single application pool. In IIS 7 and later, each application pool uses one of two .NET integration modes for running ASP.NET applications: Integrated or Classic. The .NET integration mode defined for the application pool determines how IIS processes an incoming request to the sites, applications, and Web services that run in that application pool.

•Integrated mode allows IIS to process requests in the application pool by using the IIS 7 and later integrated pipeline. This allows ASP.NET modules to participate in IIS request processing regardless of the type of resource requested.
Using Integrated mode makes features of the ASP.NET 2.0 request pipeline available to requests for static content, as well as ASP, PHP, and other content types. By default, IIS 7 and later application pools run in this mode.

•Classic mode uses the IIS 6.0 processing pipeline for hosting ASP.NET applications. In this mode, requests are processed initially through IIS 7 and later modules, and ASP.NET requests are further processed by aspnet_isapi.dll. The ASP.NET processing pipeline is separate from the IIS 7 and later processing pipeline, and the ASP.NET request processing pipeline features are not available to other resource types. This also means that an ASP.NET request must pass through authentication and authorization modules in both process models. While this is not as efficient as Integrated mode, it does allow you to run applications developed using ASP.NET version 1.1 on an IIS 7 and later server without modifying the application to run in Integrated mode.

New in IIS 7.5 and later

Starting in IIS 7.5, you can configure an application to start automatically by using the managedRuntimeLoader, CLRConfigFile, and startMode attributes of the <add> element. These attributes configure, respectively, the name of the managed DLL that provides runtime loading for your application, the common language runtime configuration file for the application, and the startup type for the application. Also new in IIS 7.5 and later is a new ApplicationPoolIdentity type for the identityType attribute of the <processModel> element. This new identity type is now the default process identity for applications, and makes it possible to set the security for your content areas to allow access for a specific application pool. To do so, you would set your security using the name of an application pool with syntax like "IIS AppPool\DefaultAppPool". This identity is created dynamically, thereby dramatically reducing the attack surface area of your server.
Compatibility

•IIS 8.5: The &lt;applicationPools&gt; element was not modified in IIS 8.5.
•IIS 8.0: The &lt;applicationPools&gt; element was not modified in IIS 8.0.
•IIS 7.5: The &lt;add&gt; element of the &lt;applicationPools&gt; element was updated in IIS 7.5 to include attributes that allow you to preload applications by using the managedRuntimeLoader, CLRConfigFile, and startMode attributes, and to run applications using the new ApplicationPoolIdentity.
•IIS 7.0: The &lt;applicationPools&gt; element was introduced in IIS 7.0.
•IIS 6.0: The &lt;applicationPools&gt; element replaces the IIS 6.0 IIsApplicationPools metabase object.

Setup

The &lt;applicationPools&gt; collection is included in the default installation of IIS 7 and later.

How to create a new application pool

1. Open Internet Information Services (IIS) Manager:
•If you are using Windows Server 2012 or Windows Server 2012 R2: on the taskbar, click Server Manager, click Tools, and then click Internet Information Services (IIS) Manager.
•If you are using Windows 8 or Windows 8.1: hold down the Windows key, press the letter X, click Control Panel, click Administrative Tools, and then double-click Internet Information Services (IIS) Manager.
•If you are using Windows Server 2008 or Windows Server 2008 R2: on the taskbar, click Start, point to Administrative Tools, and then click Internet Information Services (IIS) Manager.
•If you are using Windows Vista or Windows 7: on the taskbar, click Start, click Control Panel, double-click Administrative Tools, and then double-click Internet Information Services (IIS) Manager.
2. In the Connections pane, expand the server name, and then click Application Pools.
3. In the Actions pane, click Add Application Pool....
4. In the Add Application Pool dialog box, enter the name of the application pool in the Name box, select the .NET Framework version your site or application uses in the .NET Framework version drop-down list, select Integrated or Classic in the Managed pipeline mode drop-down list, and then click OK.
How to configure the application pool for an existing site or application

1. In the Connections pane, expand Sites, and then navigate to the Web site or application you want to add to the application pool.
2. In the Actions pane, click Advanced Settings....
3. In the General section of the Advanced Settings dialog box, click the Application Pool entry, and then click the ellipsis button.
4. In the Select Application Pool dialog box, select the application pool from the Application pool drop-down box, click OK, and then click OK again.
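Both tasks can also be scripted with the appcmd.exe tool that ships with IIS 7 and later. This is a minimal sketch; the pool name "ContosoPool" and the site name are placeholders for your own values:

```
%windir%\system32\inetsrv\appcmd add apppool /name:"ContosoPool" /managedRuntimeVersion:v4.0 /managedPipelineMode:Integrated
%windir%\system32\inetsrv\appcmd set app "Default Web Site/" /applicationPool:"ContosoPool"
```

The first command creates the pool; the second assigns the root application of "Default Web Site" to it.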

Finding the application pool account for a web application

I get a lot of questions from people who read my post on Configuring claims and forms based authentication for use with a SQL provider in SharePoint 2010 about how to find the application pool account for a certain web application.

First, find out which application pool is being used for your web application:
◦Open Internet Information Services (IIS) Manager: click "Start" – "Administrative Tools" – "Internet Information Services (IIS) Manager".
◦Expand the tree on the left and look for your web application in the "Sites" list.
◦Select the web application you want to find the application pool account for and click "Basic Settings" in the panel on the right. In my case the application pool for my web application is "SharePoint – Web Apps".

Now we have to find the application pool account, and there are two ways to do this.

In IIS:
◦Select "Application Pools" in the left panel. This will show the list of application pools in the middle.
◦Find the application pool that you found in the basic settings of your web application. To the right of the name of the application pool, the application pool account is displayed. If you are looking for the application pool account of your Central Administration web application, simply look to the right of the "SharePoint Central Administration v4" application pool. In my case the application pool account of my Central Administration web application is "SOLUTIONS\spfarm".

From the SharePoint user interface (if you already know the application pool of your web application):
◦Open Central Administration and click "Security".
◦Click "Configure service accounts".
◦In the drop-down, select the application pool of your web application. This will display the application pool account in the text box.
◦If you are looking for the application pool account of your Central Administration web application, select "Farm Account" in the drop-down box. The farm account is also the application pool account of your Central Administration web application, and selecting it will make the account show up in the text box.

That's all there is to it!
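If you prefer scripting, the same information can be pulled from the SharePoint 2010 Management Shell. This is a minimal sketch using the SPWebApplication object model (the ApplicationPool.Username property holds the account the pool runs as):

```powershell
# List every web application (including Central Administration)
# together with the account its application pool runs under
Get-SPWebApplication -IncludeCentralAdministration |
    Select-Object DisplayName, Url,
        @{Name = "AppPoolAccount"; Expression = { $_.ApplicationPool.Username }}
```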

Write text log entry inside 14 Hive logs from SharePoint Timer job

This blog will demonstrate how to create and write a log file in text format inside the 14 Hive Logs folder from a SharePoint timer job. The idea is to register errors in text format inside the 14 Hive.

Let's consider this problem statement: get the list of Organization Profiles where the value of the property "OrganizationMotto" is null or empty. "OrganizationMotto" is a custom property, created through the UI. To create the custom property, follow these steps:
1. Go to the User Profile service application.
2. Under Organizations, click Manage Organization Properties.
3. From the top bar, click "New Property".
4. Specify Name and Display Name. The name cannot be changed later, but the display name can, so make sure you give the correct name.
5. Specify Type and Length.
6. Hit Save.

There are other settings you can configure while creating the property, such as replicating the property to the user information list, showing it on the profile page, etc. After saving, you can see the custom property in the Manage Organization Properties section.

Next you need to create a SharePoint timer job, which will be deployed at the web application level. Creating a timer job step by step is outside the scope of this post; review a timer job walkthrough first if you are new to SharePoint.

After creating the timer job, you will find the Execute method, which contains the core logic. Below is the code reference:

public override void Execute(Guid targetInstanceId)
{
    SPWebApplication webApplication = this.Parent as SPWebApplication;
    using (var rootSite = new SPSite(webApplication.Sites[0].Url))
    {
        // Get a reference to the service context
        SPServiceContext serviceContext = SPServiceContext.GetContext(rootSite);

        // Get a reference to the OrganizationProfileManager,
        // which contains all OrganizationProfile objects
        OrganizationProfileManager orgProfileManager = new OrganizationProfileManager(serviceContext);

        // We want to create the file under the 14 Hive. The path can be retrieved with
        // the GetGenericSetupPath method of SPUtility, which returns
        // Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS
        string logsPath = SPUtility.GetGenericSetupPath("Logs");
        string filePath = logsPath + @"\OrganizationProfileLog.txt";

        // Use a StreamWriter to create and write the file
        using (StreamWriter streamWriter = File.CreateText(filePath))
        {
            // Iterate over each OrganizationProfile to check the OrganizationMotto value
            foreach (OrganizationProfile orgProfile in orgProfileManager)
            {
                try
                {
                    string orgDisplayName = orgProfile.DisplayName;
                    if (orgProfile["OrganizationMotto"].Value == null ||
                        string.IsNullOrEmpty(orgProfile["OrganizationMotto"].Value.ToString()))
                    {
                        // WriteLine writes the string to a new line
                        streamWriter.WriteLine("OrganizationMotto property value of " + orgDisplayName + " is empty");
                    }
                }
                catch (Exception exception)
                {
                    streamWriter.WriteLine(exception.Message);
                }
            }
            streamWriter.Flush();
        }
    }
}

After adding this code, deploy the timer job and execute it. Then go to the 14\LOGS folder and look for the OrganizationProfileLog.txt file. Open it to see whether any issues were registered.
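Instead of waiting for the schedule, you can also kick the job off from the SharePoint 2010 Management Shell. The job name below is hypothetical; use whatever name your timer job definition registers, and your own web application URL:

```powershell
# Find the custom timer job on the web application and run it immediately
Get-SPTimerJob -WebApplication http://yourwebapp |
    Where-Object { $_.Name -eq "OrganizationProfileLogJob" } |
    Start-SPTimerJob
```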
