September 14, 2016 · Server 2012R2

Today, I began to configure Work Folders in a two-node failover cluster for my workplace. The implementation we chose uses our internal Certificate Authority, since syncing will only occur when computers are connected to our internal LAN. This is good news for me, as it means I do not need (at least for now) to configure a WAP or ADFS server.

To start with, I fired up a lab in Hyper-V that was as close as possible to the actual production environment, using a Windows Server 2012 R2 iSCSI target server to present storage to the cluster.

This is not one of my usual step-by-step guides; it's more a record of the certificate procedure I used to get Work Folders functioning in my lab. I found a lot of information on using self-signed certificates in a lab environment, and on single Work Folders servers, but not much on configuring certificates for a cluster using an internal CA, which requires a few different steps. It also covers a couple of other related items I discovered along the way when using a failover cluster with Work Folders (such as the CNAME in DNS).

So without further ado, let’s go….

On the Certificate Authority:

On our CA, I copied the Web Server template, gave it a meaningful name and allowed the private key to be exported.  This is an important step as I will be installing this same certificate on multiple servers:

[Screenshot: template properties, allowing the private key to be exported]

I then gave Authenticated Users the Enroll permission:

[Screenshot: Enroll permission for Authenticated Users]

…and ensured that 'Supply in the request' was selected:

[Screenshot: 'Supply in the request' selected]

The Storage:

Having started the iSCSI initiator on the first sync server and configured the storage, I then took the storage offline.

Repeat this for the second node (and any other nodes that make up your cluster).

 

Certificate Request:

On the first Work Folders Sync Server, I used the following details in the certificate request:

[Screenshot: certificate request details]

The important items to note here are that the common name should be workfolders.your.domain, and the same for the alternative DNS name. In addition, you will need to add the name of the VCO (Virtual Cluster Object). In my case, it was named fs1, as shown in the next screenshot taken from the Failover Cluster Manager console:

[Screenshot: cluster role name in Failover Cluster Manager]
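Incidentally, if you prefer to script the request rather than click through the Certificates MMC, the PKI module's Get-Certificate cmdlet can submit an equivalent request to the CA. A minimal sketch, assuming the copied template was named WorkFoldersWebServer (substitute your own template, domain and VCO names):

# Request a certificate from the copied template, with the Work Folders
# URL and the VCO name as DNS subject alternative names
Get-Certificate -Template WorkFoldersWebServer `
    -SubjectName 'CN=workfolders.your.domain' `
    -DnsName 'workfolders.your.domain','fs1.your.domain' `
    -CertStoreLocation Cert:\LocalMachine\My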

 

Configure the SSL certificate binding:

On one of the Work Folders sync servers, perform the following…

To configure the SSL certificate binding you will need to know the thumbprint of the certificate. I retrieved this in PowerShell using the following command:

Get-ChildItem -Path cert:\localmachine\my

[Screenshot: retrieving the certificate thumbprint]

Then, in an administrative command prompt (not PowerShell!), I typed the following:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

(Obviously, replace <Cert thumbprint> with your certificate's thumbprint!)

Note: ensure you use an ipport of 0.0.0.0:443, as shown in the command line.
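You can check the binding took effect with the companion show command:

netsh http show sslcert ipport=0.0.0.0:443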

Export the Certificate:

On the same server that you just completed the certificate binding on, export the certificate…

[Screenshot: exporting the certificate]

On the other sync server cluster nodes:

  1. Import the certificate!
    [Screenshot: importing the certificate on the other nodes]
  2. Configure the SSL certificate binding as per the instructions above.
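If you would rather script the export and import than use the MMC wizard, the PKI module cmdlets can do the same job. A rough sketch; the share path and password are placeholders, and <Cert thumbprint> is the same thumbprint used in the binding:

# On the node where the certificate was enrolled: export it with its private key
$pfxPwd = ConvertTo-SecureString 'YourPfxPassword' -AsPlainText -Force
Export-PfxCertificate -Cert Cert:\LocalMachine\My\<Cert thumbprint> -FilePath \\server\share\workfolders.pfx -Password $pfxPwd

# On each of the other nodes: import it into the computer's personal store
Import-PfxCertificate -FilePath \\server\share\workfolders.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPwd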

Add a CNAME in DNS:

Add a CNAME of workfolders that points to your clustered file server role name, i.e. the VCO (Virtual Cluster Object) name. In my case, I had named it fs1:

[Screenshot: the workfolders CNAME record in DNS]
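If you prefer PowerShell to the DNS console, something like this should create the same record (the zone and host names are placeholders for your own):

# Run on (or target) a DNS server hosting the zone
Add-DnsServerResourceRecordCName -ZoneName 'your.domain' -Name 'workfolders' -HostNameAlias 'fs1.your.domain'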

 

That was it – after this, Work Folders worked like a charm! Sweet!

May 27, 2016 · Exchange, Powershell

The other day I had to write a PowerShell script that utilised some Exchange PowerShell cmdlets. The script had to run as a scheduled task on a non-Exchange server that did not have the Exchange Management Tools installed, nor was installing them an option.

To do this, I knew I had to import the Exchange cmdlets into a session on the non-Exchange server and then run the script.

Here is the command line that I used in task scheduler to achieve my goal:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://YourexchangeServer.yourDomain.com/PowerShell/ -Authentication Kerberos ; Import-PSSession $s; &'c:\Path\to\script\ExchangeScript.ps1'"
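If you later fold the remoting into the script itself rather than the task's command line, it's worth removing the session explicitly when you're done. A minimal sketch of the same idea from inside a script (the server name and the example cmdlet are placeholders for your own):

# Import the Exchange cmdlets, do the work, then tidy up
$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://YourexchangeServer.yourDomain.com/PowerShell/ -Authentication Kerberos
try {
    Import-PSSession $s | Out-Null
    Get-Mailbox -ResultSize 10   # ...your Exchange work goes here...
}
finally {
    Remove-PSSession $s          # don't leave orphaned sessions on the Exchange server
}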

April 16, 2016 · Configuration Manager 2012

I’ve implemented SCCM 2012 R2 at our corporate HQ and have now started to deploy remote distribution points at our other offices. So there I am configuring the first one, at our office in Germany, and I get to the boundary groups page, where the ‘Allow fallback source location for content’ option is selected by default. My design does not include this feature for this particular DP, so I uncheck it as per the screenshot below:
[Screenshot: 'Allow fallback source location for content' unchecked]
When I get to the summary page, I have a read through (you do too, right?) to make sure I haven’t made a mistake, and notice that it says ‘Allow fallback source location for content = Yes’!! I never said that!
[Screenshot: summary page showing 'Allow fallback source location for content = Yes']

So I go back and, sure enough, it’s unchecked. It looks like the logic is slightly wrong when the wizard presents the information on the summary screen: whichever way you set the checkbox, the summary shows the opposite!

So I left it unchecked and, even though the summary says it’s a big yes, when I checked post-installation it was in fact all OK… Phew! (See screenshot below.)
[Screenshot: post-installation properties showing fallback disabled]

Right… one down, twenty to go…

March 11, 2016 · Configuration Manager 2012

I needed to back up bookmarks from the Chrome web browser using USMT, as part of an image refresh task I’ve recently implemented using SCCM 2012 R2 with MDT integration.

Searching the Internet (why re-invent the wheel, eh?) only gave me a couple of results, and when I tried the ‘solution’ I found that it did not work.

Here is the main post that I used as a reference: http://www.itninja.com/question/user-state-migration-tool-1

The reason it failed was that the detection rule path in migapp.xml (referred to in the above link) was failing. When I installed Chrome on my system, the registry key HKLM\SOFTWARE\Wow6432Node\Google\Chrome that is being detected did not exist:
[Screenshot: registry showing no Chrome key under HKLM\SOFTWARE\Wow6432Node\Google]

I shortened the path in the detection rule to HKLM\SOFTWARE\Wow6432Node\Google, which was the only path that existed on my test systems, and that did the trick.

So all you need to do is modify migapp.xml…

This is the original; remove the two existing detection rules (highlighted in yellow):
[Screenshot: original migapp.xml detection rules]

…and replace them with the single new detection rule:
[Screenshot: modified migapp.xml with the single new detection rule]
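For reference, since the highlighting only shows in the screenshots: these detection rules are just <detects> blocks calling MigXmlHelper.DoesObjectExist, so the single shortened rule looks roughly like this (reconstructed from memory of the linked example, so treat it as a sketch):

<detects>
    <detect>
        <condition>MigXmlHelper.DoesObjectExist("Registry","HKLM\SOFTWARE\Wow6432Node\Google")</condition>
    </detect>
</detects>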

February 4, 2016 · Powershell

At my workplace we have decided, where practical, to try using network locations (see screenshot below) instead of mapped drives, for a number of reasons: limiting the damage Cryptolocker can do and running out of drive letters being two of the primary ones.

[Screenshot: network locations in File Explorer]

I wanted to automate this in a way that gives me the greatest flexibility, and naturally PowerShell once again came to the rescue.

I got most of the code from here and so cannot take credit for it. All I did was strip out some of the validation (as I did not need it) and turn it into a function, which lets me use it in a more versatile manner and meets my needs perfectly. I’ll be writing a new script that calls this function over the next couple of days.

You can find it on my GitHub here.
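If you’re curious what’s happening under the hood: a network location is just a folder under the user’s Network Shortcuts directory containing a desktop.ini (with the network-location CLSID) and a target.lnk pointing at the UNC path. A stripped-down sketch of the idea, with an illustrative function name; the version on my GitHub has more to it:

function New-NetworkLocation {
    param([string]$Name, [string]$TargetPath)

    # Network locations live in the user's 'Network Shortcuts' folder
    $location = Join-Path $env:APPDATA "Microsoft\Windows\Network Shortcuts\$Name"
    New-Item -Path $location -ItemType Directory -Force | Out-Null

    # desktop.ini with this CLSID is what makes Explorer treat the folder as a network location
    $ini = Join-Path $location 'desktop.ini'
    Set-Content -Path $ini -Value '[.ShellClassInfo]','CLSID2={0AFACED1-E828-11D1-9187-B532F1E9575D}','Flags=2'
    (Get-Item $ini -Force).Attributes = 'Hidden, System'
    (Get-Item $location).Attributes = 'ReadOnly'   # required for desktop.ini to be honoured

    # target.lnk is the shortcut the location actually opens
    $shell = New-Object -ComObject WScript.Shell
    $lnk = $shell.CreateShortcut((Join-Path $location 'target.lnk'))
    $lnk.TargetPath = $TargetPath
    $lnk.Save()
}

New-NetworkLocation -Name 'Finance' -TargetPath '\\fileserver\finance'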

November 8, 2015 · Tools

I’m migrating data to a NetApp filer SAN, and as part of this I hit an issue whereby some rather unscrupulous staff members had denied the Domain Administrator access to various folders and files, resulting in failed copy operations.

Unfortunately, due to the way the permissions were originally configured, I could not take ownership of the root directory and let inheritance do its magic.

I started to take ownership manually, until I realised the extent of the work involved. This job was going to take hours!

Err… no. Enter into the ring something I suddenly remembered reading about a few years ago but had never actually used. It’s a great tool and, even better, it’s built right into the Windows operating system: takeown.
Typing in:

takeown /? 

gave me the help screen and from that I constructed and ran the following command:

 takeown /F \\Path\to\RootDir /R

The above command gives ownership to the current user and recurses through the directory tree. Depending on who you are logged in as, you may also want to use /A, which gives ownership to the Administrators group instead of the currently logged-in user. I ran this, and four hours of work was completed in about a minute. Perfect. Back to migrating that data…
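For example, when logged on with an admin account during a migration like this, the following gives ownership to the Administrators group instead, and /D Y answers yes to the confirmation prompt on folders the current user cannot even list:

takeown /F \\Path\to\RootDir /R /A /D Y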

September 26, 2015 · Group Policy, Project: 2008R2 to 2012R2, Server 2012R2

I’m currently upgrading domain controllers from 2008 R2 to 2012 R2 in various countries for my workplace. While project planning our UK and Germany upgrades, I noticed that our UK PDC has its NTP time source set manually. As part of the project, I will be moving the PDC FSMO role from its existing DC to another, and then moving it once again at a later stage!

Naturally, I didn’t want to set the NTP time source manually each time, so here’s how I did it via GPO so that I don’t have to worry about it:

The first thing I did was create a WMI filter that would target only my PDC:

1.
In the Group Policy Management console, right-click the WMI Filters node and select New:

[Screenshot: WMI Filters node in Group Policy Management]

2.
Give the filter a meaningful name then click the Add button:

[Screenshot: naming the filter and clicking Add]

3.
Type the query to target the PDC emulator, as shown in the screenshot below. DomainRole = 5 targets only the PDC. I found this information here, where you can also find how to target other roles if need be.

[Screenshot: the WMI filter query]
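In case the query is hard to read in the screenshot, it’s the standard one-liner for the PDC emulator, and you can sanity-check it on a DC from PowerShell before trusting the filter:

# Returns the local computer only if it currently holds the PDC emulator role
Get-WmiObject -Query "Select * from Win32_ComputerSystem where DomainRole = 5"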

4.
When I clicked OK on my 2012R2 DC I received the following error:
[Screenshot: the error message (safe to ignore)]

On investigation, I discovered that it can be safely ignored, as it seems to be a bug. There are a few posts out there saying to enclose the WHERE clause in parentheses or quotes, but this never worked for me. At any rate, ignoring the message worked: I tried transferring the PDC role a couple of times and the GPO switched accordingly despite the message, so all’s good.

5.
Click Save on your newly created filter:
[Screenshot: saving the filter]

6.
Now for the GPO.  Create a new GPO and navigate to the following:
Computer Configuration\Administrative Templates\System\Windows Time Service\Time Providers

7.
Select ‘Configure Windows NTP Client’ and enter the name or IP address of your NTP server followed by ,0x01. (Incidentally, if you want to know more about the flags, check out this excellent post.)

If you wish to add more than one NTP server, note that they are space-separated, e.g. (note the space between the 0x01 and the 1.pool entry):
0.pool.ntp.org,0x01 1.pool.ntp.org,0x01
[Screenshot: Configure Windows NTP Client settings]

8.
Enable this too while you are there…
[Screenshot: Enable Windows NTP Client]

9.
And this one…
[Screenshot: Enable Windows NTP Server]

10.
Now all you need to do is select the WMI filter you created earlier in your GPO, and link the GPO to your Domain Controllers OU:
[Screenshot: selecting the WMI filter on the GPO]

11.
When you flip the PDC FSMO role, you will see the GPO applied to the new PDC once the DCs refresh their group policy (every 5 minutes by default):
[Screenshot: GPO applied to the new PDC]
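Rather than waiting for the refresh interval, you can force matters and confirm the new PDC has picked up the setting:

gpupdate /force
w32tm /query /source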

That’s it. Now, when I move the PDC FSMO role throughout my UK/Germany project, I have one less thing to worry about!

June 26, 2015 · Powershell

This post is mainly for my own reference, although if you stumble upon it and find it useful then cool!

I have configured AppLocker to run in ‘Audit’ mode and wanted a quick method of seeing what would be blocked, without having to log on to individual computers and check the event logs.

That way I can proactively monitor computers and build the whitelist rules before I switch AppLocker to ‘Enforce’ mode.

I’ve seen there are a number of AppLocker cmdlets that I’ll be exploring later, but for now, here’s my solution on GitHub.
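For the curious, the gist of the solution is this: in audit mode, AppLocker writes event ID 8003 (‘would have been blocked’) to its ‘EXE and DLL’ log, and Get-WinEvent can collect those from a list of machines remotely. A trimmed-down sketch; the computer names are placeholders:

# Pull 'would have been blocked' audit events from each computer
$computers = 'PC01','PC02'
foreach ($computer in $computers) {
    Get-WinEvent -ComputerName $computer -FilterHashtable @{ LogName = 'Microsoft-Windows-AppLocker/EXE and DLL'; Id = 8003 } -ErrorAction SilentlyContinue |
        Select-Object MachineName, TimeCreated, Message
}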

June 4, 2015 · Group Policy, Powershell

Currently at my workplace, our GPOs are backed up every now and then using a manual process via the GPMC. I decided it was time to automate this process, and I made a few interesting discoveries along the way which I thought I would share here.

Backing up all of the GPOs is very easy in PowerShell, and it’s been documented a billion times all over the Internet:

Backup-GPO -Path C:\GPOBackups -All

Setting this up as a scheduled task completed the job nicely.

The tricky part was backing up the GPO links. In my environment there are a lot of GPOs linked to a myriad of OUs throughout our organisation. In the event of a disaster, I want an easy way of seeing what those links were, and an automated method of not only restoring the GPOs but also re-creating the links. What follows is the first part of my journey…

I went with the cmdlet Get-GPOReport for this.  First of all I ran it and generated an HTML report to see what it contained:

Get-GPOReport -GUID 42917962-6bfd-4d17-ade0-bfe411245bef -Path C:\GPOReport.html -ReportType html

The information presented included the GPO links – perfect:

[Screenshot: HTML report showing the GPO links]

The next step was to script it and set it up as a scheduled task that would run alongside my GPO backup task. My criteria were to create a separate report for each GPO and to name each report after the GPO, along with the date the report was run. I also decided to export the reports as XML, as this would give me greater flexibility later on and help me achieve my ultimate goal:

Get-GPO -All | ForEach-Object { Get-GPOReport -Guid $_.Id -Path "c:\GPOBackups\Reports\$($_.DisplayName) $(Get-Date -Format "dd-MM-yy").xml" -ReportType xml }

I can now easily interrogate the xml for any info I need – for example, to see the GPO Links for a specified GPO:

$Path = 'c:\GPOBackups\reports'
$GPOXML = [xml](Get-Content "$path\Set Desktop Wallpaper 04-06-15.xml")
$GPOXML.GPO.LinksTo

Which displays the following info:

[Screenshot: the LinksTo output]

If I wanted the GUID I could further query the xml by adding the following:

[string]$GPOGUID = $GPOXML.gpo.Identifier.Identifier.InnerText
# Strip the surrounding braces from the GUID...
$GPOGUID = $GPOGUID.Trim('{}')

And if I wanted the GPO name:

$GPOName = $GPOXML.gpo.Name

You get the idea…

I now have all of the information I need, and automated measures in place, to write a script that would restore all or some of my GPOs, including the links, in the event of a disaster. (But that’s a post for another day!)
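As a taste of where that’s heading: once the GPOs themselves are restored (Restore-GPO -All -Path C:\GPOBackups), the LinksTo data above is enough to rebuild the links. A rough sketch, assuming links to OUs or the domain root and re-using the report from earlier:

# Recreate the links recorded in one of the XML reports
$Path = 'c:\GPOBackups\reports'
$GPOXML = [xml](Get-Content "$Path\Set Desktop Wallpaper 04-06-15.xml")
foreach ($link in $GPOXML.GPO.LinksTo) {
    # SOMPath looks like 'your.domain/Parent OU/Child OU';
    # convert it to the distinguished name New-GPLink expects
    $parts    = $link.SOMPath -split '/'
    $domainDN = ($parts[0] -split '\.' | ForEach-Object { "DC=$_" }) -join ','
    if ($parts.Count -gt 1) {
        # OUs appear parent-first in the path but child-first in a DN
        $ouDN   = $parts[($parts.Count - 1)..1] | ForEach-Object { "OU=$_" }
        $target = ($ouDN -join ',') + ',' + $domainDN
    }
    else {
        $target = $domainDN   # linked at the domain root
    }
    New-GPLink -Name $GPOXML.GPO.Name -Target $target | Out-Null
}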

If there’s a better or simpler method then let me know, as I’m still on my PowerShell learning journey!