Exchange CAS IIS Log Maintenance

One of the first things a young Exchange admin learns is that there is no built-in IIS log maintenance capability in Exchange for clearing out old log files. Hopefully this lesson comes when SCOM starts throwing drive space alerts, and not when the C: drive fills up and craters the server :/. To mitigate this risk, most admins create a simple script that regularly performs IIS log maintenance by deleting CAS log files older than a certain date. Here are a couple of scripts you can use, depending on your environment.

Properly scheduled IIS log maintenance will help keep your C: drive clean (what was happening on 6/14!?)
The log directory shows 15 items, so the script is working as expected. Depending on file size, you may want to shorten the retention window to 7 days in your environment.

For info on how to create the Scheduled Task to run this script daily, see the Scheduled Tasks section below.

If you just have one CAS (or one server with the Client Access Service in the case of Exchange 2016), this one-liner will get the job done:

#################################################################
# Exchange CAS IIS Log Maintenance                              #
# This script deletes IIS log files older than 14 days from the #
# local Exchange CAS server                                     #
#                                                               #
# Created by Eric Kukkuck 3/3/2015                              #
#################################################################

Get-ChildItem -Path "C:\inetpub\logs\LogFiles\W3SVC1" | Where-Object {$_.CreationTime -lt (Get-Date).AddDays(-14)} | Remove-Item
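Before you schedule the one-liner, it's worth previewing what it will delete. Remove-Item supports PowerShell's built-in -WhatIf switch, which reports what would be removed without touching anything:

```powershell
# Dry run: lists the files that would be deleted, without removing anything
Get-ChildItem -Path "C:\inetpub\logs\LogFiles\W3SVC1" |
    Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-14) } |
    Remove-Item -WhatIf
```

Once the output matches your expectations, drop the -WhatIf and schedule the task.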

If you have multiple CASes and an admin server, this script will pull a list of CAS servers from a text file and clear the old log files remotely. It also removes log files from any additional log directories (W3SVC2 through W3SVC4), if you happen to have multiple websites in play (as I did at one point: the default website, a secondary website for basic-auth ActiveSync devices, and a third website for the AV solution).

#################################################################
# Exchange CAS IIS Log Maintenance                              #
# This script deletes IIS log files older than 14 days from all #
# the Exchange CAS servers listed in the                        #
# D:\ABZ\Scripts\ExchangeCASServers.txt file                    #
#                                                               #
# Created by Eric Kukkuck 3/3/2015                              #
#################################################################

$servers = Get-Content "D:\ABZ\Scripts\ExchangeCASServers.txt"

foreach ($server in $servers)
{
	# -ErrorAction SilentlyContinue skips any W3SVC directories that don't exist on a given server
	Get-ChildItem -Path "\\$server\c$\inetpub\logs\LogFiles\W3SVC1","\\$server\c$\inetpub\logs\LogFiles\W3SVC2","\\$server\c$\inetpub\logs\LogFiles\W3SVC3","\\$server\c$\inetpub\logs\LogFiles\W3SVC4" -ErrorAction SilentlyContinue | Where-Object {$_.CreationTime -lt (Get-Date).AddDays(-14)} | Remove-Item
}
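If your servers have PowerShell remoting enabled, you can do the same cleanup without relying on the administrative C$ shares. This is a sketch of that alternative, assuming WinRM is enabled on the CAS servers and $servers is populated as above:

```powershell
# Alternative: run the deletion on each CAS server itself via remoting
# (assumes WinRM/PowerShell remoting is enabled on the target servers)
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-ChildItem -Path 'C:\inetpub\logs\LogFiles\W3SVC*' -Recurse -File -ErrorAction SilentlyContinue |
        Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-14) } |
        Remove-Item
}
```

Remoting also tends to be faster for large log directories, since file enumeration happens locally on each server instead of over the admin share.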

-Eric

IIS Log Parser Script for Finding Two Items Per Line

Troubleshooting Exchange connectivity issues can often be a chore, especially if you’re trying to go line by line in the IIS logs. The close formatting in those text files makes for a blurry and mind-numbing experience, not to mention how easy it is to miss what you’re actually looking for. To make that experience easier (and to offload the hard work to PowerShell) I’ve put together the following IIS log parser script* that will allow you to search IIS logs remotely on multiple servers for lines that contain two different items and export that data to a CSV. For example, if your helpdesk reports to you that some people are unable to access their mailbox from their ActiveSync device, you may want to start your investigation by searching the IIS logs on multiple CASes for lines that contain both “ActiveSync” and “401”. This script will do just that!

*I’d like to give credit to whoever created the original script I edited to make this one, but I cannot seem to find it online 🙁

-Eric

#########################################################
# Search Multiple IIS Logs for Multiple Items Per Line  #
# Created By Eric Kukkuck   04/16/2014                  #
#########################################################

# Edit the variables below to meet your needs #

$Path = "\\SERVER1\c$\inetpub\logs\LogFiles\W3SVC1","\\SERVER2\c$\inetpub\logs\LogFiles\W3SVC1"
$ResultsLog = "C:\Temp\IISSearchResults.csv"
$Variable1 = "ActiveSync"
$Variable2 = "401"

# The meat and potatoes #

if (Test-Path $ResultsLog -PathType Leaf) {
	Write-Host "Delete the current log file and try again"
	exit
}

# Get all the files in $Path that end in ".log" and search each line for both variables
Get-ChildItem $Path -Recurse -Filter "*.log" |
	Where-Object { -not $_.PSIsContainer } |
	ForEach-Object {
		Get-Content $_.FullName | ForEach-Object {
			if ($_ -match "$Variable1.*$Variable2") {
				$_ | Add-Content -Path $ResultsLog
			}
		}
	}
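One thing to be aware of: the regex requires $Variable1 to appear before $Variable2 on the line, which works for the "ActiveSync" / "401" example (the URL comes before the status code in IIS W3C logs) but may not for other searches. If order shouldn't matter, you can swap the single regex for two independent -match tests inside the same loop:

```powershell
# Order-independent version: the line just has to contain both terms somewhere
if ($_ -match $Variable1 -and $_ -match $Variable2) {
    $_ | Add-Content -Path $ResultsLog
}
```

Note that -match treats the variables as regular expressions, so escape any special characters (or use [regex]::Escape()) if you search for literal strings containing dots or brackets.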

O365, On-prem Exchange, and SPF Records

One of the first things you should update before integrating Exchange Online or EOP into your mail flow is your organization’s SPF record. If you plan to use EOP as your perimeter gateway you may think that the O365 IPs are all you need, and you can get away with an SPF record that looks something like this:

v=spf1 include:spf.protection.outlook.com -all

Unfortunately, that isn’t the case. When Exchange Online shuffles email around internally between tenants, any message from your on-prem environment will still need to be validated against your SPF record. What’s even more interesting is that if you have mailboxes in Exchange Online and Exchange on-prem, and decide to put a perimeter gateway like Proofpoint in front of EOP, then you get to have three sets of IPs (Exchange Online’s, your on-prem environment’s, and your Proofpoint gateway’s) in your SPF record!
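For illustration, a combined record covering all three sources might look like the following. Here 203.0.113.0/24 is a placeholder for your on-prem sending IPs, and _spf.example-gateway.com is a stand-in for whatever include mechanism or IP ranges your perimeter gateway vendor publishes:

```
v=spf1 ip4:203.0.113.0/24 include:spf.protection.outlook.com include:_spf.example-gateway.com -all
```

Keep in mind SPF's limit of 10 DNS lookups per evaluation; each include counts against it, so check your record with an SPF validator after adding the extra mechanisms.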

-Eric

Scheduled Tasks and the Exchange Management Shell (EMS)

As an Exchange admin, one of my favorite features of the Windows OS is the trusty Task Scheduler. This simple capability allows me to generate reports, perform cleanup tasks, and ensure things in my environment stay configured as I expect. The one issue I sometimes run into, however, is how to have Task Scheduler run a task within the Exchange Management Shell (EMS), so to make it easy I’m going to share that info with you.

As you can see in the screenshot below, we’ve got all of our fields populated.

The Action field is self-explanatory, but the data in the other three goes as follows:

Program/script:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Add arguments (optional):

-version 2.0 -NonInteractive -WindowStyle Hidden -command ". 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto; .\ScriptName.ps1"

Start in (optional):

D:\ABZ\Scheduled Tasks
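On Server 2012 and later you can also build the same task from PowerShell instead of the Task Scheduler GUI, using the built-in ScheduledTasks cmdlets. This is a sketch using the example values above; the task name and 2:00 AM start time are placeholders you'd adjust for your environment:

```powershell
# Build the action with the same program, arguments, and working directory as the GUI fields above
$action = New-ScheduledTaskAction `
    -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' `
    -Argument '-NonInteractive -WindowStyle Hidden -command ". ''C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1''; Connect-ExchangeServer -auto; .\ScriptName.ps1"' `
    -WorkingDirectory 'D:\ABZ\Scheduled Tasks'

# Run daily at 2:00 AM (placeholder time)
$trigger = New-ScheduledTaskTrigger -Daily -At 2am

# Register the task (placeholder name)
Register-ScheduledTask -TaskName 'Exchange EMS Script' -Action $action -Trigger $trigger
```

Note the doubled single quotes inside the -Argument string; that's how you embed a literal single quote in a single-quoted PowerShell string.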

-Eric

Recovering from Exchange Database Dismount due to Transaction Log Accumulation

As we all know, Microsoft Exchange uses transaction logs to commit data to the mailbox databases. One of the first critical issues most of us run into during our messaging career is a database dismount caused by running out of capacity for the transaction logs. This is usually the result of at least one failed backup, and the emergency fix is to clear some of the oldest transaction logs from the drive so you can remount the database. Once the DB is back up (in a DAG you’ll have to manually delete the same transaction logs from each copy to restore HA), you may want to quickly truncate all of the remaining eligible logs to maximize the time you have before the problem happens again. For that, Microsoft has created a tool for testing VSS capabilities on an Exchange server (versions 2010, 2013, and 2016) that has the additional benefit of kicking off the log truncation process while it runs. You can find the tool here:

https://gallery.technet.microsoft.com/office/VSSTesterps1-script-4ed07243
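On the emergency fix itself: when deciding which of the oldest logs are safe to remove, a common approach is to dump the checkpoint file with eseutil. Logs with generation numbers below the checkpoint have already been committed to the database. The path below is an example; use your database's actual log directory and log prefix:

```powershell
# Dump the checkpoint file for the database (example path and E00 log prefix)
# The "Checkpoint:" line shows the oldest log generation still required;
# logs with lower generation numbers have been committed and are safe to move
eseutil /mk "D:\ExchangeLogs\DB01\E00.chk"
```

Moving the old logs to another drive rather than deleting them outright gives you a way back if anything goes sideways before the next good backup.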

A few things to note before using the VSSTester.ps1 script:

  • Run this tool on the server with the active copy of the database.
  • If this is the server the backup failed on, you may need to reboot. (You could restart the Information Store service instead, but IMHO if you’re going to move all the DBs off that server to bounce the service, you might as well reboot to clear out any other gremlins.) For the tool to complete successfully, all of the VSS writers need to be in the correct state.
  • Once this has successfully cleared the logs on the active copy, the passive copies should truncate their logs as expected.
  • If you’re having problems running the tool on one server, you can always make another copy of the DB active on another server and run it there.
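To check the VSS writer state mentioned above before running the tool, use the built-in vssadmin utility from an elevated prompt:

```powershell
# Each Exchange-related writer should report State: [1] Stable with no last error;
# a writer stuck in another state is a common reason the tool (and backups) fail
vssadmin list writers
```

If a writer is in a failed state, restarting its owning service (or rebooting, as noted above) usually brings it back to Stable.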

-Eric