OpenAFS, Kerberos, and Network Identity Manager: random thoughts, updates and comments about OpenAFS, Kerberos for Windows, Network Identity Manager and related topics. By Secure Endpoints.<br />
<h1>
OpenAFS for Windows 1.7 IFS: Reaching maturity? (2013-05-11)</h1>
Earlier this week OpenAFS for Windows 1.7.24 was released. My last post on the IFS was back in <a href="http://blog.secure-endpoints.com/2012/11/openafs-windows-ifs-thirteen-months.html" target="_blank">early November 2012</a> after the 1.7.18 release. In the last six months and 348 commits there have been a number of significant improvements.<br />
<h2>
I/O Processing</h2>
The November blog entry finished with an enumeration of "to do" items that were necessary but unlikely to be implemented in the short term. Contrary to what was written, a new I/O processing pathway was implemented and made available in 1.7.22. The primary benefit of the new I/O pathway implementation is sustained throughput during file system store operations. The prior implementation would stall due to lost races between the afs redirector and afsd_service fighting for ownership over file extents. In addition to throughput improvements, the new I/O processing pathway permits applications to store data to the file server and bypass both the Windows System Cache and the AFSCache file by specifying the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/cc644950%28v=vs.85%29.aspx" target="_blank">FILE_FLAG_NO_BUFFERING</a> flag when opening the file with <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858%28v=vs.85%29.aspx" target="_blank">CreateFile</a>.<br />
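MSDN documents that FILE_FLAG_NO_BUFFERING callers must issue file offsets and transfer lengths that are multiples of the volume sector size, with sector-aligned buffers. The helper below is an illustrative, portable C sketch of that constraint check; the 512-byte sector size is an assumption here (real code would query the volume rather than hard-code it):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sector size; a real caller would query the volume
 * (e.g. via GetDiskFreeSpace on Windows) instead of hard-coding 512. */
#define ASSUMED_SECTOR_SIZE 512u

/* FILE_FLAG_NO_BUFFERING requires the file offset, the transfer
 * length, and the user buffer address all to be multiples of the
 * volume sector size. This helper models that documented rule. */
static bool unbuffered_io_is_valid(uint64_t file_offset,
                                   uint32_t length,
                                   uintptr_t buffer_addr)
{
    return (file_offset % ASSUMED_SECTOR_SIZE == 0) &&
           (length      % ASSUMED_SECTOR_SIZE == 0) &&
           (buffer_addr % ASSUMED_SECTOR_SIZE == 0);
}
```

A 4KB write at offset 0 from an aligned buffer passes the check; a write at offset 100 does not, which is why applications mixing buffered and unbuffered handles must be careful about partial pages.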
<br />
In the last six months a number of bugs were corrected that could result in data corruption when a mixture of file writes was issued to the same file with and without FILE_FLAG_NO_BUFFERING. One common scenario in which data corruption has been observed is downloading files via Firefox or Internet Explorer. To make the most efficient use of clock time, the browser begins downloading a file from the web server as soon as the user selects the URL. At this point the browser does not know where to write the data, so it stores the data in process memory while the user is presented with a File Save ... dialog. Once the browser knows where the data should be written, it creates the file in "no buffering" mode and requests that all of the data cached in memory be written to disk at once. The file is then closed and re-opened with normal buffering behavior. The data corruption occurred when the Windows File Cache received a request to store data in the middle of a 4K page but did not recognize that it must first read the prior contents of the file into memory. The error was due to a failure to set the ValidDataLength on the file during non-cached (non-buffered) I/O write operations.<br />
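The partial-page bug is easiest to see in miniature. The sketch below is illustrative C, not OpenAFS source (the page model and names are my own): it contrasts the buggy path, which treats an unpopulated cache page as zero-filled, with the correct read-modify-write path that first reads the file's prior contents.

```c
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Illustrative model, not OpenAFS code: a cache page backed by a file.
 * A write into the middle of a page must first read the page's prior
 * contents from the file; skipping that read destroys the bytes that
 * surround the written range. */
typedef struct {
    unsigned char data[PAGE_SIZE];
    bool valid;              /* page already populated from the file? */
} cache_page;

static void page_write(cache_page *pg, const unsigned char *file_contents,
                       size_t off, const unsigned char *src, size_t len,
                       bool read_modify_write)
{
    if (!pg->valid) {
        if (read_modify_write)
            memcpy(pg->data, file_contents, PAGE_SIZE); /* correct path */
        else
            memset(pg->data, 0, PAGE_SIZE);             /* buggy path   */
        pg->valid = true;
    }
    memcpy(pg->data + off, src, len);
}

/* Run the scenario: a file full of 'A', two 'B's written at offset 100
 * of a fresh page, then probe one byte of the resulting page. */
static unsigned char probe_after_partial_write(bool read_modify_write,
                                               size_t probe)
{
    unsigned char file_contents[PAGE_SIZE];
    memset(file_contents, 'A', PAGE_SIZE);
    cache_page pg = { {0}, false };
    page_write(&pg, file_contents, 100, (const unsigned char *)"BB", 2,
               read_modify_write);
    return pg.data[probe];
}
```

With read-modify-write, the bytes around the write survive; without it they come back as zeros, which is exactly the class of corruption described above.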
<br />
<h2>
Reparse Points and Symlinks</h2>
The March entry entitled <a href="http://blog.secure-endpoints.com/2013/03/symbolic-links-on-windows.html" target="_blank">Symbolic Links on Windows</a> described in significant detail the challenges of working with Symlinks on Windows. Over the last six months the management of Symbolic Links via the AFS redirector has been altered. Instead of representing AFS symlinks with the Microsoft assigned reparse point tag value for OpenAFS, the AFS redirector now uses the reparse point tag used to represent NTFS Symbolic Links. The benefits of this are many:<br />
<ul>
<li>The Win32 <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa363866%28v=vs.85%29.aspx" target="_blank">CreateSymbolicLink</a> api can be used to create AFS symbolic links.</li>
<li>Applications that are NTFS Symbolic Link aware recognize AFS symbolic links without modification.</li>
<ul>
<li><a href="http://jpsoft.com/" target="_blank">JP Software's Take Command</a></li>
<li>Microsoft's <a href="http://technet.microsoft.com/en-us/library/cc733145%28v=ws.10%29.aspx" target="_blank">robocopy</a></li>
<li>Microsoft's <a href="http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx" target="_blank">PowerShell</a></li>
<li><a href="http://www.cygwin.org/" target="_blank">Cygwin</a></li>
<li><a href="http://schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html" target="_blank">Link Shell Extension</a></li>
</ul>
</ul>
<h3>
Reparse Points to Files</h3>
There are many applications that are not reparse point aware. As described in Microsoft's <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa365682%28v=vs.85%29.aspx" target="_blank">Symbolic Link Effects on File System Functions</a> if a directory entry's attributes include the FILE_ATTRIBUTE_REPARSE_POINT flag the attributes, timestamps and size refer to the reparse point object and not the target of the reparse point even though a normal CreateFile request will open the target object. Applying the reparse point size to the target object's stream is likely to result in an incorrect end-of-file determination. <br />
<br />
What is most unfortunate is that all versions of .NET and all versions of Java through 1.6 ignore the FILE_ATTRIBUTE_REPARSE_POINT flag. Of course, from the perspective of application developers that use .NET and Java, the problem is not Microsoft's to solve but a flaw in AFS. It is viewed as a flaw in AFS because the OpenAFS SMB gateway neither supported reparse points nor exposed symlinks to applications; as a result, .NET and Java simply worked.<br />
<br />
In 1.7.24 a new registry option has been provided that disables the reporting of <i>Symbolic Links to Files</i> as reparse points. Setting the 0th bit of the <b>ReparsePointPolicy</b> value activates this behavior. When this policy is active, directory entries for symbolic links do not contain the FILE_ATTRIBUTE_REPARSE_POINT flag, and their timestamps and file size are those of the target file.<br />
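A sketch of how such a bit-mask policy test works (the macro and function names below are mine for illustration, not the driver's internal identifiers; only the bit-0 semantics come from the release notes):

```c
#include <stdbool.h>
#include <stdint.h>

/* The 1.7.24 ReparsePointPolicy registry value is a bit mask; per the
 * release notes, setting bit 0 stops symlinks-to-files from being
 * reported as reparse points. Names here are illustrative. */
#define REPARSE_POLICY_HIDE_FILE_SYMLINKS 0x1u

/* Should a symlink-to-file directory entry carry
 * FILE_ATTRIBUTE_REPARSE_POINT under the given policy? */
static bool report_file_symlink_as_reparse_point(uint32_t policy)
{
    return (policy & REPARSE_POLICY_HIDE_FILE_SYMLINKS) == 0;
}
```

With the default policy of 0 the reparse point attribute is reported; any value with bit 0 set suppresses it, even if other policy bits are also set.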
<br />
<h2>
Other improvements</h2>
There have been a broad range of application compatibility improvements to the network provider interfaces, optimizations of the garbage collection operations, compatibility fixes for IBM AFS 3.6 file servers for those that still use them, and dozens of other small tweaks.<br />
<br />
The Summer months will be spent on Windows 8.1 support and a rewrite of the Authentication Group and Process tracking. The AuthGroup changes are desired so that Reparse Point Policies can be applied at run time to independent groups of applications.<br />
<br />
<br />
<h2>
Credits</h2>
The OpenAFS for Windows client is the product of <a href="http://www.your-file-system.com/" target="_blank">Your File System, Inc.</a>, <a href="http://www.kerneldrivers.com/" target="_blank">Kernel Drivers, LLC</a>, and <a href="https://www.secure-endpoints.com/" target="_blank">Secure Endpoints, Inc</a>.
To support the development of the OpenAFS for Windows client, please
purchase support contracts or make donations. The recommended donation
is $20 per client installation per year.<br />
<br />
<h1>
IOZone Performance Measurements of OpenAFS (2013-03-25)</h1>
<span style="font-family: inherit;">The I/O processing pathways were rewritten for the OpenAFS 1.7.22 release. One industry standard method of measuring I/O performance in a file system independent manner is the iozone benchmark developed and maintained by Don Capps of NetApp.</span><br />
<br />
<span style="font-family: inherit;"><a class="moz-txt-link-freetext" href="http://www.iozone.org/">http://www.iozone.org/</a> </span><br />
<br />
<span style="font-family: inherit;">This blog post will compare the iozone results for OpenAFS 1.5.75 which uses the SMB to AFS gateway service and OpenAFS 1.7.23 which uses the new AFS redirector. </span><br />
<br />
<span style="font-family: inherit;">The test environment includes a Lenovo Thinkpad W701ds workstation running Win7-64 as the client system. 8GB ram, dual Core i7 x920 2.00GHz processors (8
cores total), Windows Experience ratings: </span><br />
<ul>
<li><span style="font-family: inherit;">Processor: 7.2 </span></li>
<li><span style="font-family: inherit;">Memory: 7.4 </span></li>
<li><span style="font-family: inherit;">Graphics: 5.8 </span></li>
<li><span style="font-family: inherit;">Gaming: 6.5 </span></li>
<li><span style="font-family: inherit;">Disk: 5.9 </span></li>
</ul>
<span style="font-family: inherit;">The connection to the file server is a 1Gbit wired network through a
10Gbit switch.
The file server is OSX 10.6.8 Server running on a 2010 Mini Server using iSCSI
attached storage sharing a single 1Gbit network interface. The OpenAFS file server is version 1.6.2 using Demand Attach.
The AFS cache manager configuration includes: </span><br />
<ul>
<li><span style="font-family: inherit;">BlockSize 1 (4KB) </span></li>
<li><span style="font-family: inherit;">CacheSize 0x200000 (2GB) </span></li>
<li><span style="font-family: inherit;">ChunkSize 21 (2MB) </span></li>
<li><span style="font-family: inherit;">RxUdpBufSize 0xc00000 </span></li>
</ul>
<span style="font-family: inherit;"> All iozone tests were performed using "-Rac output.wks -g 2G". </span><br />
<br />
<br />
<h2>
Write Performance Comparisons<br />
</h2>
One of the big complaints about the OpenAFS SMB to AFS gateway is its poor write throughput. The iozone output for 1.5.75 demonstrates the limitations. Although the peak throughput for small files (about 1MB) reaches the 30,000 KBytes/second mark, the sustained throughput for larger files is below 16,000 KBytes/second.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-SzmqAY4v1x8/UU_T20A1kQI/AAAAAAAAAIY/MGd70qAMpag/s1600/iozone-openafs-1_5_7500-auth-smb_Page_01.png" style="margin-left: auto; margin-right: auto;"><img alt="" border="0" height="492" src="https://lh3.ggpht.com/-SzmqAY4v1x8/UU_T20A1kQI/AAAAAAAAAIY/MGd70qAMpag/s1600/iozone-openafs-1_5_7500-auth-smb_Page_01.png" title="OpenAFS 1.7.75 (SMB) Write Performance" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">OpenAFS 1.5.75 (SMB) Write Performance</td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
The 1.7.23 AFS Redirector does a much better job. The peak throughput increases with both the record size and the file size. Depending on the record size the throughput ranges from 30,000 KBytes/second to 65,000 KBytes/second. This is more than double the peak throughput of the SMB to AFS gateway.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/--3Ml7cbnsgc/UU_T60g6LOI/AAAAAAAAAJs/V89doa22CiA/s1600/iozone-openafs-1_7_2207-clear_Page_01.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="494" src="https://lh3.ggpht.com/--3Ml7cbnsgc/UU_T60g6LOI/AAAAAAAAAJs/V89doa22CiA/s1600/iozone-openafs-1_7_2207-clear_Page_01.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">OpenAFS 1.7.23 (RDR) Write Performance</td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
<h2>
Read Performance Comparisons</h2>
1.5.75 read performance is quite inconsistent. Although there are peak throughput values above 200,000 KBytes/second the majority of record sizes are read at speeds in the 80,000 to 100,000 KBytes/second range. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-CvgfCSjDLno/UU_T2pWaISI/AAAAAAAAAIQ/RgPfyXCr1-Q/s1600/iozone-openafs-1_5_7500-auth-smb_Page_03.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="494" src="https://lh3.ggpht.com/-CvgfCSjDLno/UU_T2pWaISI/AAAAAAAAAIQ/RgPfyXCr1-Q/s1600/iozone-openafs-1_5_7500-auth-smb_Page_03.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">OpenAFS 1.5.75 (SMB) Read Performance</td></tr>
</tbody></table>
<br />
The 1.7.23 AFS Redirector is faster by a factor of ten. The majority of record sizes demonstrate read throughput in the 800,000 KBytes/second to 1,000,000 KBytes/second range.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-LcDagdtgVzI/UU_T7n7H4oI/AAAAAAAAAKA/45iD-bEgm_8/s1600/iozone-openafs-1_7_2207-clear_Page_03.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="494" src="https://lh3.ggpht.com/-LcDagdtgVzI/UU_T7n7H4oI/AAAAAAAAAKA/45iD-bEgm_8/s1600/iozone-openafs-1_7_2207-clear_Page_03.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">OpenAFS 1.7.23 (RDR) Read Performance</td></tr>
</tbody></table>
<h2>
Conclusions</h2>
One of the primary goals of converting OpenAFS from an SMB gateway to a legacy file system redirector was a significant improvement in I/O throughput. The improvements on the read pathway have certainly been obtained. The 2x improvement in the write path is good, but there is certainly room for further improvement.<br />
<br />
<h1>
Symbolic Links on Windows (2013-03-24)</h1>
Over the last month I have learned more about symlinks on Windows than I ever wanted to know. As many readers are aware, I am the lead developer of the OpenAFS client for Windows, and the AFS name space supports two symbolic link type objects:<br />
<ul>
<li>Mount Points: a directory entry that refers to the root directory of an AFS volume.</li>
<li>Symlinks: a directory entry that refers to any absolute or relative target path; traditionally in POSIX notation.</li>
</ul>
The original AFS client for Microsoft Windows was implemented as an SMB 1.2 to AFS gateway service and it pre-existed Windows 2000, the first version of Microsoft Windows to include NTFS 3.0 and support for <a href="https://en.wikipedia.org/wiki/Reparse_point" target="_blank">reparse points</a>. Due to the lack of native OS support, AFS specific command-line tools "fs mkmount", "fs lsmount", "fs rmmount" and "symlink make", "symlink list", and "symlink remove" were provided.<br />
<br />
<a href="http://blog.secure-endpoints.com/2011/09/openafs-ifs-edition-is-finally-here.html" target="_blank">In 2007, Peter Scott and I began work on a Windows Installable File System for AFS</a>. Technically, the new AFS client is a legacy file system redirector driver which has access to the same functionality and flexibility as NTFS. In Windows Vista and beyond Microsoft added support for symbolic links to files and directories within NTFS. They implemented this functionality by combining a directory object or a file object with <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa365503%28v=vs.85%29.aspx" target="_blank">Reparse Point Data</a>. The data consists of a Reparse Point Tag value (assigned by Microsoft) and a <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/ff552012%28v=vs.85%29.aspx" target="_blank">tag specific data structure</a>.<br />
<br />
Microsoft assigns reparse tag values and then includes them in future versions of the ntifs.h header file in the DDK. If you are developing a file system driver for Windows and wish to have a reparse point tag allocated to your driver, follow the instructions at Microsoft's <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/gg463079.aspx" target="_blank">Reparse Point Tag Request page</a>. Microsoft is likely to assign only a single Reparse Point Tag value for your driver. Therefore, I recommend that you request a tag value without the "high latency" or "name surrogate" bits set. You can always combine those bits with your assigned tag value. The DDK ntifs.h header includes macros to test various bits:<br />
<ul>
<li><a href="http://msdn.microsoft.com/en-us/library/windows/desktop/ff549452%28v=vs.85%29.aspx" target="_blank">IsReparseTagMicrosoft()</a></li>
<li>IsReparseTagNameSurrogate()</li>
<li>IsReparseTagValid()</li>
<li>IsReparseTagHighLatency()</li>
</ul>
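These macros reduce to single-bit tests on the 32-bit tag value. The sketch below reimplements them with plain C bit masks (bit positions per the DDK: bit 31 Microsoft, bit 30 high latency in older DDKs, bit 29 name surrogate; the macro names are restyled and the tag values come from this post):

```c
#include <stdbool.h>
#include <stdint.h>

/* The ntifs.h tag-test macros are bit tests on the 32-bit tag.
 * Bit 31 = Microsoft-defined tag, bit 30 = high latency (older DDKs),
 * bit 29 = name surrogate. Names restyled for this sketch. */
#define IS_REPARSE_TAG_MICROSOFT(tag)      (((tag) & 0x80000000u) != 0)
#define IS_REPARSE_TAG_HIGH_LATENCY(tag)   (((tag) & 0x40000000u) != 0)
#define IS_REPARSE_TAG_NAME_SURROGATE(tag) (((tag) & 0x20000000u) != 0)

/* Values from this post: Microsoft's NTFS symlink tag and the tag
 * Microsoft assigned to OpenAFS. */
#define IO_REPARSE_TAG_SYMLINK     0xA000000Cu
#define IO_REPARSE_TAG_OPENAFS_DFS 0x00000037u
```

Note how IO_REPARSE_TAG_SYMLINK (0xA000000C) has both the Microsoft and name-surrogate bits set, while the OpenAFS tag (0x37) was assigned with none of the high bits, as recommended above: the driver can OR the surrogate bit in later.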
Reparse Points are a generic mechanism for turning a directory or file object into a reference to something else. The IsReparseTagMicrosoft() macro is important because it determines which data structure will be set on the file system object. A Microsoft Tag will use the <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/ff552012%28v=vs.85%29.aspx" target="_blank">REPARSE_DATA_BUFFER</a> structure whereas a non-Microsoft Tag will use the <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/ff552014%28v=vs.85%29.aspx" target="_blank">REPARSE_GUID_DATA_BUFFER</a> structure. The latter structure can be customized by the driver vendor. I recommend defining a structure that contains a driver specific sub-tag value and a union of purpose specific values. In fact, this is what we did for the AFS redirector.<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">//<br />// Reparse tag AFS Specific information buffer<br />//</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><br /><span style="font-size: x-small;">#define IO_REPARSE_TAG_OPENAFS_DFS 0x00000037L</span></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">#define IO_REPARSE_TAG_SURROGATE 0x20000000L</span></span><br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;">// {EF21A155-5C92-4470-AB3B-370403D96369}<br />DEFINE_GUID (GUID_AFS_REPARSE_GUID,<br /> 0xEF21A155, 0x5C92, 0x4470, 0xAB, 0x3B, 0x37, 0x04, 0x03, 0xD9, 0x63, 0x69); </span></span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;"> </span><br />#define OPENAFS_SUBTAG_MOUNTPOINT 1<br />#define OPENAFS_SUBTAG_SYMLINK 2<br />#define OPENAFS_SUBTAG_UNC 3<br /><br />#define OPENAFS_MOUNTPOINT_TYPE_NORMAL L'#'<br />#define OPENAFS_MOUNTPOINT_TYPE_RW L'%'<br /><br />typedef struct _AFS_REPARSE_TAG_INFORMATION<br />{<br /> ULONG SubTag;<br /> union<br /> {<br /> struct<br /> {<br /> ULONG Type;<br /> USHORT MountPointCellLength;<br /> USHORT MountPointVolumeLength;<br /> WCHAR Buffer[1];<br /> } AFSMountPoint;<br /><br /> struct<br /> {<br /> BOOLEAN RelativeLink;<br /> USHORT SymLinkTargetLength;<br /> WCHAR Buffer[1];<br /> } AFSSymLink;<br /><br /> struct<br /> {<br /> USHORT UNCTargetLength;<br /> WCHAR Buffer[1];<br /> } UNCReferral;<br /> };<br />} AFSReparseTagInfo;</span></span><br />
<br />
<span style="font-size: small;"><span style="font-family: inherit;">The motivation behind using reparse points with the AFS redirector is due to limitations of the SMB to AFS gateway. The global AFS name space consists of millions of individual volumes scattered across hundreds or thousands of AFS cells maintained by different organizations. The entire name space can be thought of being rooted at <i>/afs</i> with /afs/<cellname> referring to the volume "root.cell" in the cell whose volume location database servers can be found via a <a href="https://tools.ietf.org/html/rfc5864" target="_blank">DNS SRV query</a> that assumes a one-to-one mapping between <cellname> the c<span style="font-size: small;">ell name </span>and DNS domain name. That is too much information but the point is that when the UNC path \\afs\your-file-system.com\ is evaluated by an AFS client the subset of the AFS name space it refers to is unlikely to be a single volume. This is really important because the Win32 <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa964920%28v=vs.85%29.aspx" target="_blank">GetVolumeInformationByHandleW</a></cellname></cellname> and GetDiskFreeSpaceEx API permits an application to query properties of the volume such as the amount of free space, the volume name, serial number, and system flags.</span></span><br />
<br />
An SMB share UNC path is assumed to refer to a single volume. The SMB 1.2 server does not return different volume information for different paths; it always returns the volume information associated with the root of the share. For AFS this is a nightmare. Each AFS volume will have a unique name and id. They will also have an assigned quota, have a certain number of bytes free, and can be either read-only or read-write. Since the AFS name space and its potential associated storage are infinite while a single volume has finite constraints, what should the GetVolumeInformation and GetDiskFree API families return when given an AFS path? In the SMB world, AFS claims there is only one volume, "AFS"; it is read-write, the size of the volume is 2TB, and there is always 1TB free.<br />
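To make the "one fake volume" behavior concrete, here is a toy model of the gateway's hard-coded volume reply. The names and structure are mine; only the reported values (volume "AFS", read-write, 2TB total, 1TB free) come from the post:

```c
#include <stdint.h>

/* Toy model (names mine) of the hard-coded volume reply the SMB
 * gateway gives for every AFS path, regardless of the underlying
 * volume's real quota, free space, or read-only status. */
typedef struct {
    const char *name;
    uint64_t    total_bytes;
    uint64_t    free_bytes;
    int         read_only;
} fake_volume_info;

static fake_volume_info smb_gateway_volume_info(void)
{
    fake_volume_info v;
    v.name        = "AFS";
    v.total_bytes = 2ull * 1024 * 1024 * 1024 * 1024;  /* always 2TB  */
    v.free_bytes  = 1ull * 1024 * 1024 * 1024 * 1024;  /* always 1TB  */
    v.read_only   = 0;                                  /* always R/W */
    return v;
}
```

Because the reply never varies with the path, every awkward behavior in the list that follows falls out of these constants.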
<br />
This lying by the SMB to AFS gateway results in some awkward behaviors.<br />
<ul>
<li>Attempts to open a file for write, create a file, truncate a file, or create or remove a directory on a read-only volume return ERROR_WRITE_PROTECT even though the volume properties indicate that the volume is read-write. This results in awkward error messages from applications such as the Explorer Shell, which checks the FILE_READ_ONLY_VOLUME flag to determine whether operations such as New ..., Rename, Delete, etc. should be removed from menus when the active directory is part of a read-only volume.</li>
<li>Since the volume size is hard coded to be 2TB with 1TB free, it is not possible for applications to create files that are larger than 2TB.</li>
<li>But worse, the Windows SMB client believes that there is 1TB free. It can accept vast amounts of data from the application before it discovers that in fact there is no room on the file server to store it. When the space suddenly disappears the application and the user will receive a "Delayed Write Error" which effectively means "I know I promised you that I would safely store your data for you but I misplaced it and you can't have it back." In other words, a fatal data loss occurs which more often than not will result in application failure and perhaps a monetary loss.</li>
<li>Mount point and symlink objects are not exposed to Windows applications. The applications believe that there are only directories and files. This has some really negative consequences. When an attempt is made to delete a directory object via the Explorer Shell, the shell deletes not only the directory entry but all of the contents of the directory tree below it. If the directory entry were exposed as a reparse point, only the reparse point would be removed, leaving the target intact; instead, the Explorer Shell attempts to delete everything. Similarly, when a symlink refers to a file, the symlink should be removed but the target should be left alone. Finally, rename operations should be performed on the mount point or symlink and not on the target object.</li>
</ul>
When Peter and I designed the AFS redirector, one of the goals was to address these shortcomings. Implementing reparse points for AFS mount points and symlinks was key because reparse point attributes on directory objects are the indication to an application that the directory entry and its target may not be in the same volume; therefore, the volume and disk free information must be fetched. Of course, not all applications properly pay attention to reparse point attributes. Application authors frequently assume that a UNC path or a network drive letter mapping must refer to an SMB 1.2 share and therefore can only refer to a single volume. I am tempted to produce a wall of shame for applications that get it wrong. However, the failure of application authors to implement the correct behavior in their applications is not a reason for a file system to fail to make the data available to them.<br />
<br />
Up until the 1.7.21(00) release the AFS redirector exposed mount point and symlink data using the Microsoft assigned IO_REPARSE_TAG_OPENAFS_DFS tag value and the AFSReparseTagInfo structure wrapped by the REPARSE_GUID_DATA_BUFFER structure. In principle this should have been fine. Applications should not need to parse the reparse data in order to properly interpret a reparse point. The file attributes of the reparse point object indicate whether it is a file or a directory. The high latency bit of the reparse point tag indicates whether the target object is located in a Hierarchical Storage Management system that might not be able to respond to queries about the target object in a reasonable period of time. Unfortunately, many applications ignore the FILE_ATTRIBUTE_REPARSE_POINT flag when it is returned by a <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364944%28v=vs.85%29.aspx" target="_blank">GetFileAttributes</a> or <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364946%28v=vs.85%29.aspx" target="_blank">GetFileAttributesEx</a> call even though these APIs <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa365682%28v=vs.85%29.aspx" target="_blank">explicitly return information about a reparse point and not the target</a>. Some applications behave this way only when the reparse point tag is not recognized, which usually means when IsReparseTagMicrosoft() returns false; others do it always.<br />
<br />
What happens when the FILE_ATTRIBUTE_REPARSE_POINT bit is discarded and the rest of the file attributes are assumed to apply to the target file? In addition to the file attributes field, the GetFileAttributes and FindFirstFile families of functions also return the file size. The file size does not have much meaning when the object is a directory, but when the target of the reparse point is a file, using the wrong file size can be catastrophic. File contents can be truncated when read or overwritten when written. Applications will be mighty confused when they continue to append data to a file but believe the file size never changed. They will be even more confused when they attempt to delete a file only to find that either the reparse point or the target file was deleted, but not both. Regardless, bad things happen, and that leaves end users with a bad taste in their mouths.<br />
<br />
For the 1.7.22(00) release I decided to significantly flesh out the reparse point handling. For starters, I had been working with Rex Conn on <a href="http://blog.secure-endpoints.com/2013/03/jpsoftwares-take-command-and-openafs.html" target="_blank">adding knowledge of AFS Reparse Points to Take Command</a>. Take Command (and its predecessor 4NT) have long had excellent support for AFS. Take Command distinguishes, in its directory listings, symlinks to files, symlinks to directories, and junctions. It does so for AFS as well as NTFS. When Take Command 15 is combined with OpenAFS 1.7.22, users can not only view the target information for AFS mount points and symlinks but can also create them, provided the Take Command process has the SeCreateSymbolicLinkPrivilege, which permits the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa363866%28v=vs.85%29.aspx" target="_blank">CreateSymbolicLink</a> API to create a symlink to a directory or a file.<br />
<br />
CreateSymbolicLink encapsulates the following operations:<br />
<ol>
<li>Determine the type of the target object (file or directory)</li>
<li>Create either a directory or a file object to match the target type </li>
<li>Construct the REPARSE_DATA_BUFFER structure using the IO_REPARSE_TAG_SYMLINK tag</li>
<li>Issue the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364595%28v=vs.85%29.aspx" target="_blank">FSCTL_SET_REPARSE_POINT</a> to assign the reparse data to the directory or file</li>
<li>Close the handle to the file or directory</li>
</ol>
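Steps 3 and 4 hinge on the REPARSE_DATA_BUFFER layout for IO_REPARSE_TAG_SYMLINK. The sketch below mirrors that documented layout with fixed-width types so it compiles outside the DDK, and builds the buffer for a target path (illustrative, not DDK code; SYMLINK_FLAG_RELATIVE is the documented 0x1 flag, and for simplicity the substitute and print names are the same string):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint16_t WCHAR;   /* 16-bit as on Windows */

#define IO_REPARSE_TAG_SYMLINK 0xA000000Cu
#define SYMLINK_FLAG_RELATIVE  0x00000001u

/* Layout of the Microsoft symlink reparse data, mirrored with
 * fixed-width types for illustration. ReparseDataLength counts the
 * bytes that follow the 8-byte header. */
typedef struct {
    uint32_t ReparseTag;
    uint16_t ReparseDataLength;
    uint16_t Reserved;
    uint16_t SubstituteNameOffset;   /* offsets are relative to  */
    uint16_t SubstituteNameLength;   /* the start of PathBuffer, */
    uint16_t PrintNameOffset;        /* lengths are in bytes     */
    uint16_t PrintNameLength;
    uint32_t Flags;
    WCHAR    PathBuffer[1];
} SymlinkReparseData;

/* Build the buffer for a symlink whose substitute and print names are
 * both the `nchars`-character string `target`; caller frees it. */
static SymlinkReparseData *build_symlink_reparse(const WCHAR *target,
                                                 uint16_t nchars,
                                                 int relative)
{
    uint16_t name_bytes = (uint16_t)(nchars * sizeof(WCHAR));
    size_t total = offsetof(SymlinkReparseData, PathBuffer)
                   + 2 * (size_t)name_bytes;
    SymlinkReparseData *rd = calloc(1, total);
    if (!rd) return NULL;
    rd->ReparseTag = IO_REPARSE_TAG_SYMLINK;
    rd->ReparseDataLength = (uint16_t)(total - 8);  /* minus header */
    rd->SubstituteNameOffset = 0;
    rd->SubstituteNameLength = name_bytes;
    rd->PrintNameOffset = name_bytes;
    rd->PrintNameLength = name_bytes;
    rd->Flags = relative ? SYMLINK_FLAG_RELATIVE : 0;
    memcpy(rd->PathBuffer, target, name_bytes);
    memcpy((uint8_t *)rd->PathBuffer + name_bytes, target, name_bytes);
    return rd;
}
```

On Windows the resulting buffer would be handed to DeviceIoControl with FSCTL_SET_REPARSE_POINT; here it simply shows why accepting this structure alongside the AFS-specific one is straightforward for a driver: the tag in the first four bytes tells it which layout follows.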
In other words, CreateSymbolicLink only creates Microsoft symlinks. Since the tag type is in the data structure, it is fairly easy for a file system driver to accept both the IO_REPARSE_TAG_SYMLINK data and the file system specific data. Once implemented, it became possible for the Take Command <a href="http://jpsoft.com/help/mklink.htm" target="_blank">MKLINK</a> command to be used to create symlinks within AFS volumes.<br />
<br />
For the longest time I resisted squatting on Microsoft's tag and data structure, but as long as <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/ff544836%28v=vs.85%29.aspx" target="_blank">FSCTL_GET_REPARSE_POINT</a> returns the IO_REPARSE_TAG_OPENAFS_DFS data, many applications do the wrong thing. There simply wasn't any choice from the perspective of application compatibility. As a result, beginning with the 1.7.23(00) release AFS Symlinks are exposed using the IO_REPARSE_TAG_SYMLINK tag instead of the IO_REPARSE_TAG_OPENAFS_DFS tag. Only AFS Mount Points continue to be exposed using the IO_REPARSE_TAG_OPENAFS_DFS tag.<br />
<br />
With this change not only can Take Command understand AFS symlinks but so can the Explorer Shell, the Cygwin POSIX environment, the PowerShell Community Extensions, and anything else that can manipulate NTFS symlinks. Even Hermann Schinagl's <a href="http://schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html" target="_blank">Link Shell Extension</a>.<br />
<br />
One might think that everyone would be happy at this point, except that end users are still faced with applications that do not know how to properly interpret Microsoft Reparse Points. One example is Microsoft's own .NET. In Microsoft's <a href="http://msdn.microsoft.com/en-us/library/bb513869.aspx" target="_blank">How to: Iterate Through a Directory Tree (C# Programming Guide)</a> the author explains:<br />
<br />
<div style="text-align: center;">
<i> NTFS file systems can contain <span class="parameter">reparse points</span> in the form of<span class="parameter"> junction points</span>, <span class="parameter">symbolic links</span>, and <span class="parameter">hard links</span>. The .NET Framework methods such as <a href="http://msdn.microsoft.com/en-us/library/system.io.directoryinfo.getfiles.aspx">GetFiles</a> and <a href="http://msdn.microsoft.com/en-us/library/system.io.directoryinfo.getdirectories.aspx">GetDirectories</a>
will not return any subdirectories under a reparse point. This behavior
guards against the risk of entering into an infinite loop when two
reparse points refer to each other. In general, you should use extreme
caution when you deal with reparse points to ensure that you do not
unintentionally modify or delete files. If you require precise control
over reparse points, use platform invoke or native code to call the
appropriate Win32 file system methods directly.</i></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
That is not the only thing that .NET does. It also hides the FILE_ATTRIBUTE_REPARSE_POINT bit in the file attributes from applications and returns the file size of the reparse point data instead of that of the target. As a result, reading a file through a symlink can trigger a data truncation bug. If the .NET team truly wanted to hide reparse points from application developers, they should have substituted the file attribute information of the target files in all directory enumeration output. Providing compatibility for broken applications such as this should not be the responsibility of a file system. However, applications are more important to end users than file systems, and if the applications do not work, the file system will be replaced (or never adopted in the first place). As a result a future version of the Windows AFS client will probably include a mechanism for requesting that Symlinks to Files be reported as Files and not IO_REPARSE_TAG_SYMLINK reparse points.</div>
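The truncation failure mode can be modeled in a few lines of Python (purely illustrative; this is not .NET's code, and the function name is mine): a copier that trusts the size reported for the link object, i.e. the reparse-data length, rather than the size of the target stops short of the real end of file.

```python
import os

def copy_trusting_reported_size(src: str, reported_size: int, dst: str) -> int:
    """Copy *reported_size* bytes of *src* to *dst* and return the number
    of bytes written.  If *reported_size* is the reparse-data length
    rather than the target file's length, the copy is silently truncated."""
    with open(src, "rb") as fin:
        data = fin.read(reported_size)  # stops short of the real EOF
    with open(dst, "wb") as fout:
        fout.write(data)
    return os.path.getsize(dst)
```

If the enumeration had reported the target's attributes instead, `reported_size` would equal the target's length and the copy would be complete.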
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
While on the subject of Symlinks and Windows, I would also like to discuss other approaches to symlinks that have been implemented on Windows over the years. As I mentioned, Cygwin supports Microsoft IO_REPARSE_TAG_SYMLINK reparse points as Symlinks.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">$ ls -l af*<br />lrwxrwxrwx 1 Administrators None 9 Sep 19 2012 afs -> //afs/all</span></span></div>
<div style="text-align: left;">
However, "ln -s target link" cannot be used to create IO_REPARSE_TAG_SYMLINK reparse points. This is because "ln -s" creates Cygwin-specific symlink objects in the file system. Instead of using reparse points, Cygwin writes a file that begins with the cookie "!&lt;symlink&gt;", followed by a Unicode BOM and the target path in Unicode. The file has the FILE_ATTRIBUTE_SYSTEM attribute set as an indicator that the file might be a Cygwin symlink.</div>
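The layout just described can be sketched in Python (an illustration of the format as described above; real Cygwin adds details such as setting FILE_ATTRIBUTE_SYSTEM on the file, which is outside the byte layout modeled here):

```python
CYGWIN_COOKIE = b"!<symlink>"
BOM = b"\xff\xfe"  # UTF-16 little-endian byte order mark

def make_cygwin_symlink_blob(target: str) -> bytes:
    """Build the file contents of a Cygwin-style symlink: the magic
    cookie, a Unicode BOM, then the target path encoded in UTF-16."""
    return CYGWIN_COOKIE + BOM + target.encode("utf-16-le")

def read_cygwin_symlink_blob(blob: bytes) -> str:
    """Return the target path if the blob looks like a Cygwin symlink."""
    if not blob.startswith(CYGWIN_COOKIE + BOM):
        raise ValueError("not a Cygwin symlink file")
    return blob[len(CYGWIN_COOKIE) + len(BOM):].decode("utf-16-le")
```

The cookie check is why Cygwin also requires the FILE_ATTRIBUTE_SYSTEM hint: without it, every small file would have to be opened and sniffed during directory traversal.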
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
On Windows Server, Microsoft provides both a POSIX environment, Interix, and an NFSv3 implementation. Interix implements symlinks similarly to Cygwin except that the cookie is "IntxLNK\1" and the format of the target path is different. The NFS implementation identifies its symlinks by means of an extended attribute, "NfsSymlinkTargetName", which stores the target path.</div>
<br />
<div style="text-align: left;">
There is one more type of link object in Windows which is sometimes treated as a symlink: the Windows Shortcut (.LNK) file, which is interpreted by the Windows Shell. Oddly, Cygwin is at present capable of writing .LNK files but not of creating IO_REPARSE_TAG_SYMLINK reparse points.<br />
<blockquote class="tr_bq">
[Update: Corinna Vinschen of Cygwin indicates the reason is that POSIX paths can be stored in .LNK files but IO_REPARSE_TAG_SYMLINK fields require the use of Windows file paths and foreknowledge of the target type.]</blockquote>
<br /></div>
<div style="text-align: left;">
Microsoft Windows Reparse Points are an extremely powerful and flexible mechanism for implementing file system specific control points: much more powerful than the traditional POSIX symlink, although also much more complex. An example of a tool that is more powerful because of its reparse point awareness is Microsoft's "Robust File Copy for Windows" tool, better known as <a href="http://technet.microsoft.com/en-us/library/cc733145%28v=ws.10%29.aspx" target="_blank">RoboCopy</a>.
RoboCopy can be configured to exclude junction points (/XJ) by which
they mean reparse points; exclude junction points for directories but
not files (/XJD); exclude junction points for files (/XJF); and even
copy the symlink instead of the target (/SL). All of these switches
work with the Windows AFS client.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
My final comment for this post is that enumerating AFS directories which contain symlinks is an extremely expensive operation. Unlike the POSIX equivalents, a Windows directory enumeration always returns the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa365740%28v=vs.85%29.aspx" target="_blank">WIN32_FIND_DATA</a> structure for each directory entry, which contains the file attributes. A reparse point to a directory must have the FILE_ATTRIBUTE_DIRECTORY bit set and a reparse point to a file must not. All of the other fields of the WIN32_FIND_DATA structure can be determined from the reparse point itself, but AFS does not have a method of hinting to the client what the type of the target object is. As a result, the target path must be evaluated for each and every directory listing. A directory such as /afs/andrew.cmu.edu/ which contains more than 30,000 relative symlinks to directories will require nearly twice that number of RPCs to the file server to complete the directory enumeration. Something to think about when planning your AFS name space.</div>
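As a back-of-the-envelope model (my own simplification with illustrative parameters, not measured client behavior), the cost of such an enumeration can be estimated as a few bulk fetches for the directory pages plus roughly two RPCs per symlink to evaluate its target:

```python
def estimate_enum_rpcs(total_entries: int, symlink_entries: int,
                       rpcs_per_symlink: int = 2,
                       entries_per_fetch: int = 1000) -> int:
    """Rough RPC count for one Windows-style directory enumeration:
    a handful of bulk fetches for the directory pages, plus a small
    fixed cost per symlink to resolve its target so the attributes in
    WIN32_FIND_DATA can be filled in.  All parameters are illustrative."""
    page_fetches = -(-total_entries // entries_per_fetch)  # ceiling division
    return page_fetches + symlink_entries * rpcs_per_symlink
```

With 30,000 entries that are all symlinks, the per-symlink evaluation swamps the page fetches, which is why the enumeration costs roughly twice the entry count in RPCs.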
<div style="text-align: left;">
<br /></div>
Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com1tag:blogger.com,1999:blog-3333505129375354922.post-61656461363210255912013-03-14T16:04:00.003-04:002013-03-24T11:14:16.720-04:00JPSoftware's Take Command and OpenAFSI have been a user of Rex Conn's replacement command processors since the early days of <a href="https://en.wikipedia.org/wiki/4DOS" target="_blank">4DOS</a>. When I switched to OS/2 and began work on <a href="http://www.columbia.edu/kermit/os2.html" target="_blank">OS/2 C-Kermit</a>, 4OS2 was there for me. When I added <a href="https://en.wikipedia.org/wiki/Rexx" target="_blank">REXX </a>language support to OS/2 C-Kermit, 4OS2 added it as well. When I moved to Windows NT, there was 4NT waiting for me. In 2003 I began my work on OpenAFS for Windows (WinAFS), which at the time was implemented as a local SMB server acting as a proxy to the AFS name space. Before I started work on the WinAFS client, the only method of accessing the AFS name space was by use of Windows drive letter mappings. It wasn't possible to consistently access the AFS name space via a UNC path. It wasn't until the OpenAFS 1.3.66 release in July 2004 that it became possible to live entirely in a UNC \\AFS\cellname\path\ world, except that the Microsoft command processor (cmd.exe) does not permit UNC paths to be the current directory. 4NT, on the other hand, had supported UNC paths as the current directory for years, and it was a natural fit. Drive letter mappings were suddenly no longer necessary for my day-to-day activities.<br />
<br />
For those readers who are not long-time AFS users, there are some important things to understand about the AFS name space. Unlike a Windows file share, where the UNC path \\server\share\ refers to a single on-disk volume on the specified machine, the AFS UNC path \\afs\cell\ refers to the root directory of a volume named <i>root.cell</i> in the specified AFS cell. AFS UNC paths are location independent and do not signify on which physical machines the data is stored. In fact, <i>root.cell</i> is in most cases a geographically replicated volume. In addition to directories and files, AFS supports mount points and symlinks as first class file system types. An AFS mount point is an object that refers to the root directory of another AFS volume, and a symlink can refer to any absolute or relative file path.<br />
<br />
The AFS name space can therefore be viewed as a directed graph of volumes joined to other volumes where each volume contains a directory tree. Volumes can be either read/write or read-only snapshots of a read/write volume. Volumes can be assigned quotas or can be permitted to grow to fill the entire partition on which they are stored. AFS volumes can be migrated from server to server while in use and the amount of free space can change as a result of the volume being moved. The AFS name space is therefore a challenge to use when it is accessed via the SMB protocol.<br />
<br />
SMB file shares were designed prior to the existence of NTFS Junctions and NTFS Symlinks (added in Vista and Server 2008). The assumption is that there is only one volume on one partition located at the other end of a UNC path. Obtaining the free space is most often performed using <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364935%28v=vs.85%29.aspx" target="_blank">GetDiskFreeSpace</a> which can only refer to root directories and not <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364937%28v=vs.85%29.aspx" target="_blank">GetDiskFreeSpaceEx</a> which can refer to arbitrary paths. Even the MSDN documentation for these APIs states that the reason to use the Ex version is to avoid unnecessary arithmetic whereas the most important reason for using the Ex version in my opinion is that it works with complex name spaces constructed by NTFS junctions and AFS mount points.<br />
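For what it's worth, Python's standard library exposes the GetDiskFreeSpaceEx style of query in a portable way: `shutil.disk_usage` accepts an arbitrary path, so the result reflects the volume that actually backs that path rather than the root of a share (on Windows it is implemented on top of the Ex-style API).

```python
import shutil

def free_space_at(path: str) -> int:
    """Free bytes on the volume that actually backs *path*.

    Because any path is accepted, name spaces stitched together from
    junctions or mount points report the correct per-volume numbers."""
    return shutil.disk_usage(path).free
```

A caller interrogating a deep AFS path would get the numbers for the volume containing that path, not for <i>root.cell</i>.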
<br />
Since the AFS name space is made up of a potentially unbounded number of volumes joined together via mount points, and volumes can sometimes be read/write and other times read-only, how should the WinAFS SMB server respond when it is asked to report the total disk space and total free disk space? It is impossible to provide an accurate value for either. As a result the AFS SMB server would simply lie: it would report an arbitrarily large number for the partition size and the free space. Free space was always reported even when there was absolutely none.<br />
<br />
Which brings us back to JPSoftware and 4NT. While it wasn't possible for arbitrary volume information to be obtained via the Win32 API, the AFS <i>fs</i> command obtains this information via the afs <i>path ioctl</i> interface. In September 2005 Rex Conn added OpenAFS specific knowledge and functionality to 4NT 7.0:<br />
<ol>
<li>The command parser understands UNIX style inputs
/afs/your-file-system.com/user/jaltman
and automatically converts them to UNC notation
\\afs\your-file-system.com\user\jaltman when the first component matches the AFS "NetbiosName".</li>
<li>The command language contains @AFSCELL, @AFSMOUNT, @AFSPATH,
@AFSSYMLINK, @AFSVOLID, @AFSVOLNAME functions which operate on paths and return AFS specific data.</li>
<li>Free space computations use AFS volume information so it is accurate even when the Win32 GetVolumeInformation() call executed over SMB would not be.</li>
</ol>
Over the last five years, as the AFS Redirector has been developed, 4NT (now called Take Command) has been a constant companion. One of my favorite features of Take Command directory listings is their awareness of Reparse Points. For example:<br />
<div class="separator" style="clear: both; text-align: center;">
<span id="goog_1228041477"></span><span id="goog_1228041478"></span><a href="http://1.bp.blogspot.com/-WqE3Ed1Iq1w/UU8WwVKvNGI/AAAAAAAAAHo/oULp_oFU1O4/s1600/dir-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://lh3.ggpht.com/-WqE3Ed1Iq1w/UU8WwVKvNGI/AAAAAAAAAHo/oULp_oFU1O4/s1600/dir-1.png" /></a></div>
As you can see, directory listings expand the targets of NTFS Junctions and Symlinks, providing the target information. I have for the longest time wanted this behavior for AFS. Unfortunately, until a late TC 14.03 build, Take Command did not understand how to parse the AFS Reparse Point data. Now that it does, we get the same useful output:<br />
<br /><div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-p9W2PIyzyFU/UU8XZe8GkMI/AAAAAAAAAH0/dFTU2qwRHRI/s1600/dir-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://lh3.ggpht.com/-p9W2PIyzyFU/UU8XZe8GkMI/AAAAAAAAAH0/dFTU2qwRHRI/s1600/dir-2.png" /></a></div>
<span style="font-size: small;"><span style="font-family: inherit;">Although not shown, symlink to file targets are displayed as well.</span></span><br />
With the release of Take Command 15.0 and OpenAFS 1.7.22 the circle has now been completed. Not only can Take Command display AFS mount point and symlink targets, but Take Command's MKLINK command can be used to create symlinks to both files and directories, and the DEL and RMDIR commands can be used to remove them.<br />
<br />
Take Command's GLOBAL command can either cross [/J] or not cross [/N] junctions as specified.<br />
<br />
Finally, Take Command properly uses GetVolumeInformationByHandleW() to obtain volume information. As a result the built-in AFS functions operate even when AFS is accessed via an NTFS directory symlink.<br />
<br />
I recommend Take Command for any user of OpenAFS that relies upon the command shell.<br />
<br />
For further information on Take Command visit the JP Software web site at http://jpsoft.com/.<br />
Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-22654699192499008332012-11-05T16:06:00.004-05:002012-11-05T23:39:29.713-05:00OpenAFS Windows IFS Thirteen Months Later<a href="http://blog.secure-endpoints.com/2011/09/openafs-ifs-edition-is-finally-here.html">On 18 September 2011,</a> I discussed the release of the first OpenAFS release that included a native installable file system redirector. It is often said that it takes ten developer years to shake out all of the bugs and performance glitches in a new file system. The last year has certainly seen its fill of BSODs, deadlocks, hiccups, and application interoperability issues. Today, I am releasing version 1.7.18. Over the last thirteen months more than 750 changes have been implemented improving performance, stability, and application compatibility. This post will highlight some of the challenges and lessons learned in the process.<br />
<br />
<u>Antimalware Filter Driver compatibility</u><br />
The vast majority of problems that end users have experienced with the AFS redirector have been related to interactions with Anti-Virus and other forms of content scanners which install filter drivers on the system. Life would be much easier if there were a standard set of hooks that these products could use to scan files and deny access, quarantine, or otherwise alter the normal application data access patterns. Unfortunately that is not the case, and learning what works and what doesn't has often been left to trial and error.<br />
<br />
Since AFS is a network file system that relies upon credentials that are independent of the local operating system, there are added complexities. For example, when Excel opens a spreadsheet file it uses the AFS tokens which are available to the active logon session. The anti-virus service, on the other hand, is running as an NT service as the SYSTEM or other account in a different logon session. As such, it does not have access to the user's AFS tokens unless the request to scan the file content is performed by borrowing the File Object from Excel or by impersonating the Excel process's security context. Most anti-virus products do impersonate the calling thread or borrow the File Object, but not all do. Versions of Microsoft Security Essentials prior to 2.0 did not, and it was a significant problem for OpenAFS.<br />
<br />
Anti-virus scanners can choose to scan during the CreateFile operation and during the CloseHandle operation (aka File Cleanup). The challenge here for the AFS redirector is that it must hold various locks in order to protect the integrity of the data and provide cache coherency with the file server managed data versions. Anti-virus scanners can hijack the thread performing the CreateFile or Cleanup and inherit the locks that are already held, or they can spawn a worker thread to re-open the file, perform a scan, and close it again while the application-initiated CreateFile or Cleanup is blocked. Any locks that are held across CreateFile or Cleanup which are required by the anti-virus worker thread will result in a deadlock. Failure to hold the locks can result in data corruption. Sophos and Kaspersky were two of the most challenging products to learn to interact with safely.<br />
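The deadlock shape can be modeled with an ordinary non-reentrant lock (a toy model, not driver code; the names are illustrative): the scanning worker needs the very lock that the create path still holds.

```python
import threading

fcb_lock = threading.Lock()  # stands in for a lock held across CreateFile

def av_scan_reopen() -> bool:
    """The anti-virus worker re-opens the file, which needs fcb_lock.
    Returns True if the scan could proceed, False if it would block."""
    if fcb_lock.acquire(blocking=False):
        fcb_lock.release()
        return True
    return False

def create_file_holding_lock() -> bool:
    """Model of a CreateFile that keeps fcb_lock held while a scan of
    the same file is attempted: the scan cannot acquire the lock."""
    with fcb_lock:
        return av_scan_reopen()
```

In the toy model the worker simply reports failure; in a kernel, a worker that blocks on the held lock never returns, and neither does the thread waiting for the scan.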
<br />
Microsoft periodically organizes <a href="http://msdn.microsoft.com/en-us/library/windows/hardware/hh582249.aspx" target="_blank">File System Filter Driver PlugFests</a> which give file system developers, anti-virus vendors, encryption products, content scanners, and others an opportunity to test their forthcoming products against Microsoft's upcoming operating system releases. The PlugFest is also an opportunity for third-party vendors to perform interoperability testing with each other. It was unfortunate that, due to increased secrecy regarding the development of Windows 8 and Server 2012, Microsoft was unable to hold a PlugFest for more than a year. But in 2012 there were two events, in February and August.<br />
<br />
The February PlugFest was the first opportunity to interop with a broad range of vendors since the release of 1.7.1. At that event every Interop session was a painful experience. During that week 1.7.7 was scheduled to be released but it had to be pulled because of the many problems (deadlocks, BSODs, and data corruption) that were identified during the interop testing sessions.<br />
<br />
This past August's experience was the complete opposite. The code that would become the 1.7.17 release including Windows 8 and Server 2012 specific functionality was tested. Other than a minor error that was uncovered during the first interop session with Microsoft's own anti-virus engine used in Security Essentials and Windows Defender there was not a single hiccup the rest of the week. As it turns out, the AFS redirector was the only non-Microsoft file system to implement all of the required new interfaces for Windows 8.<br />
<br />
<u>Application Compatibility</u><br />
Of course, compatibility with deployed applications is the goal. Whenever possible applications should be unaware that their data is being stored in AFS as opposed to Windows built-in file systems such as NTFS and CIFS. This challenge is made more complicated by the fact that most applications do not implement feature tests for optional file system APIs. Instead they just assume that every feature implemented by NTFS or CIFS will be available everywhere. Whether a file system is treated as local or remote is often decided by whether or not UNC path notation is used. Things should become easier for non-Microsoft file systems now that Microsoft has introduced <b>ReFS</b>, a new file system that does not implement many features of NTFS including transactions, short names, extended attributes or alternate data streams; none of which are implemented by the AFS redirector.<br />
<br />
Still, it is worth noting that the AFS redirector is a very complete implementation of the NTFS and CIFS feature set, including support for CIFS Pipe Services such as WKSSVC and SRVSVC and a full implementation of the Network Provider API. Both the Pipe Services and the Network Provider API are used by applications to browse the capabilities of the network file system and the available resources such as server and share names. The Network Provider API is also responsible for managing drive letter to UNC path mappings and path name normalization. One example of a Network Provider incompatibility was the failure to implement network performance statistics, which resulted in periodic 20 second delays within the Explorer Shell.<br />
<br />
<u>Reparse Points</u><br />
One of the most significant visible changes between the SMB gateway interface and the native AFS redirector is the use of file system Reparse Points to represent AFS Mount Points and Symlinks. Unlike POSIX symlinks, which are unstructured data, a Windows File System Reparse Point is a tagged, structured data type. Microsoft maintains a registry of all of the tag values and the organizations to which they are assigned. More than 50 reparse point tags have been registered and OpenAFS is the proud assignee of IO_REPARSE_TAG_OPENAFS_DFS (0x00000037L). The OpenAFS Reparse Tag Data has three sub-types (Mount Point, Symlink, UNC Referral) which are used to export the target information for each.<br />
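The tag values themselves encode ownership. Per the ntifs.h conventions, the high bit of a reparse tag marks a Microsoft-owned tag and bit 29 marks a "name surrogate" (an object that stands for another name, such as a link). A quick sketch using the OpenAFS tag and the NTFS symlink tag:

```python
IO_REPARSE_TAG_OPENAFS_DFS = 0x00000037  # assigned to OpenAFS
IO_REPARSE_TAG_SYMLINK     = 0xA000000C  # Microsoft NTFS symlink

def is_microsoft_tag(tag: int) -> bool:
    """High bit set means the tag is owned by Microsoft
    (the IsReparseTagMicrosoft macro in ntifs.h)."""
    return bool(tag & 0x80000000)

def is_name_surrogate(tag: int) -> bool:
    """Bit 29 set means the tagged object stands for another named
    entity, e.g. a link (the IsReparseTagNameSurrogate macro)."""
    return bool(tag & 0x20000000)
```

This is part of why generic tools mishandle third-party tags: they can tell a tag is not Microsoft's, but without knowing the vendor's data layout they cannot resolve the target.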
<br />
When the SMB gateway was used, the entire AFS name space appeared to applications as a single volume exported as a single Windows File Share. It was not possible for Windows to report volume information (quota, readonly status, etc.) or detect out of space conditions prior to the application filling the Windows page cache. Now that reparse points are in use, Windows applications can recognize that a path might have crossed from one volume to another. Tools such as <b>robocopy</b> that are <i>Junction</i> (aka Reparse Point) aware can perform operations without crossing volume boundaries.<br />
<br />
While this is a major improvement in capability, it is also a dramatic change in behavior for applications. Some applications rely upon the assumption that a Windows File Share can only refer to a single volume and further assume that any file path using UNC notation is a path to a Windows File Share. Such applications can become confused when they query the volume information of <i>\\afs\example.org\</i> and are told that the volume is READ_ONLY when the full target path <i>\\afs\example.org\user\j\johndoe\</i> is not. This is a deficiency in the application and not a fault of the file system.<br />
<br />
One downside of the reparse point model is that applications need to understand the format of the structured data to make use of it. Tools such as <a href="http://jpsoft.com/" target="_blank">JPSoftware's Take Command </a>are reparse point aware but cannot at present properly display the target information. The same is true for <a href="http://sourceware.org/cygwin/" target="_blank">Cygwin</a> and related tools.<br />
<br />
<u>Authentication Groups</u> <br />
The SMB gateway client associated credentials with Windows account usernames (or SIDs). The AFS redirector tracks process creation and associates credentials with Authentication Groups (AG). Each process inherits an AG from the creating thread and can create additional AGs to store alternate sets of credentials. When background services such as <i>csrss.exe</i> and <i>svchost.exe</i> execute tasks on behalf of foreground processes they impersonate the credentials of the requesting thread. By impersonating the caller, the background thread informs the AFS redirector which credentials should be used.<br />
<br />
Sometimes a mistake is made and the background service fails to impersonate the caller and instead attempts to rely upon the service's own credentials to perform its job. This is the case with <i>conhost.exe</i> when it attempts to access or manipulate the contents of the "Command Prompt.lnk" shortcut. As a result the contents of cmd.exe shortcuts are ignored when initiating command prompt console sessions.<br />
<br />
<u>When Will 1.8 Ship?</u><br />
Users frequently ask "when will 1.8 ship? I don't want to deploy the new OpenAFS client until it is production quality." The reason that the OpenAFS client is 1.7.x and not 1.8.x has less to do with stability than with the rate of change and unfinished work. The Windows platform has new releases issued every one to two months whereas a release for the servers and UNIX clients is issued every six to twelve months. The rate of change to support new features or improve compatibility and performance on Windows is significantly higher. Nearly 1/3 of all patches contributed to OpenAFS.org are new functionality for Windows. Please do not focus so much on the version label.<br />
<br />
1.8 will be issued when the rate of change in the Windows client drops to the point where a new release each month is no longer desirable. The two most significant areas of work that need to be addressed before a 1.8 release are the Kerberos bindings and the Installer. At present, the 1.7.x binaries are built directly against the MIT KFW 3.2 libraries. This permits OpenAFS to work with KFW 3.2 and the KFW translation layer provided by Heimdal 1.5. However, the KFW 3.2 API does not permit fine-grained control over the use of DES encryption types, nor is it guaranteed to work with future KFW releases from MIT. The installer requires ease of use improvements. The user should not be prompted when files are in use but should always be prompted to provide a cell name unless the installation is an upgrade.<br />
<br />
<u>What Comes After 1.8?</u><br />
With large scale deployment comes operational experience. The AFS Redirector design has been shown to have weaknesses that result in a larger than desired in-kernel memory footprint. There are four areas in which a redesign would be desirable:<br />
<br />
1. The File Control Blocks (FCB) and the Object Information Control Blocks (OICB) are bound to one another even though they could very well have different life spans. An FCB must exist as long as there is an open HANDLE. Multiple open handles for the same file system object refer to the same FCB. The FCB contains metadata about the file object that is specific to the file system in-kernel. It tracks the allocated file size, the list of data extents that are present in-kernel, etc. For each FCB there must exist an OICB which contains the AFS specific meta data associated with the file object including AFS data version, AFS FileID, etc. While an OICB must exist for an FCB, it does not have to be the other way around.<br />
<br />
The mutual binding of the OICB and the FCB makes garbage collection more difficult than it needs to be. Some of the race conditions that were fixed in the 1.7.18 release were the result of this complexity. One of the important goals of a redesign is to break this mutual dependency and instead only maintain a reference from the FCB to the OICB and not the other way around. Doing so will permit FCBs to be garbage collected when the last handle is closed and OICB objects to be garbage collected when their active reference counts reach zero. The garbage collection worker thread will hold fewer locks and have a smaller impact on file system performance.<br />
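The proposed one-way reference can be sketched abstractly (plain Python standing in for kernel structures; the classes and fields are illustrative only, not the driver's actual layout):

```python
class OICB:
    """AFS-specific metadata (FileID, data version, ...); eligible for
    garbage collection once its reference count drops to zero."""
    def __init__(self, fileid):
        self.fileid = fileid
        self.refcount = 0

class FCB:
    """Windows file control block; holds a one-way reference to its
    OICB, so the OICB never needs to know about the FCB."""
    def __init__(self, oicb):
        self.oicb = oicb
        oicb.refcount += 1

    def close(self):
        """Last handle closed: the FCB can be collected immediately,
        and the OICB lingers only while something still references it."""
        self.oicb.refcount -= 1
        self.oicb = None
```

Because nothing points back at the FCB, collecting it requires no coordination with the OICB, which is the property that lets the garbage collector hold fewer locks.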
<br />
2. The Directory Entry Control Blocks (DECB) also maintain a reference to the OICB. In fact, each time a directory is enumerated to satisfy FindFirst/FindNext API requests, not only is a DECB allocated but an OICB is as well. Permitting the OICB to be allocated only when a FCB is allocated instead of as part of directory enumeration will reduce the in-kernel memory footprint.<br />
<br />
3. Directory enumeration is currently performed for the entire directory not only when the directory object is opened by an application but also when a FindFirst API is issued for a non-wildcard search. The vast majority of FindFirst searches are non-wildcard searches for explicit names. Instead of populating the full contents of the directory in-kernel, the memory footprint can be further reduced by pushing those queries to the <i>afsd_service</i> process.<br />
<br />
4. File data is exchanged between the <i>afsd_service</i> and the Windows page cache by sharing a memory-mapped backing store between the AFS Redirector and the <i>afsd_service</i>. The control over specific file extents is managed by a reverse ioctl interface between the redirector and the user-land service. This protocol is racy and can result in inefficient exchanges of control. Replacing the existing protocol with one that tracks extent request counts and active reference counts will reduce wasteful exchanges and improve data throughput.<br />
<br />
These proposed changes are a significant undertaking and they will not appear in the 1.7.x/1.8.x release series. <br />
<br />
<u>Credits</u><br />
The OpenAFS for Windows client is the product of <a href="http://www.your-file-system.com/" target="_blank">Your File System, Inc.</a>, <a href="http://www.kerneldrivers.com/" target="_blank">Kernel Drivers, LLC</a>, and <a href="https://www.secure-endpoints.com/" target="_blank">Secure Endpoints, Inc</a>. To support the development of the OpenAFS for Windows client, please purchase support contracts or make donations. The recommended donation is $20 per client installation per year.<br />
<br />Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com1tag:blogger.com,1999:blog-3333505129375354922.post-36374291111451329662012-11-03T11:12:00.001-04:002012-11-03T11:12:22.793-04:00I want my Windows IFS OpenAFS Client to be fastIn 2008 I wrote <a href="http://blog.secure-endpoints.com/2008/03/i-want-my-openafs-windows-client-to-be.html" target="_blank">I want my OpenAFS Windows client to be fast</a> which described the options I used to tune the Windows OpenAFS client that used the SMB server gateway. As of this writing the current release of OpenAFS for Windows is 1.7.18 which is based upon a native Windows Installable File System, AFSRedir.sys. This post is an update describing the configuration values I use with the native redirector interface.<br />
<br />
The most important options related to throughput fall into
two categories: <br /><br /><u>How much data can I cache?</u><br />CacheSize<br />Stats<br /><br /><u>How Fast Can I Read and Write?</u><br />BlockSize<br />ChunkSize<br />Daemons <br />
RxUdpBufSize<br />SecurityLevel<br />
ServerThreads<br />TraceOption<br />
<br />
<br />All of these options are described in Appendix A of the <a href="http://docs.openafs.org/ReleaseNotesWindows/index.html#appendix_a.html" target="_blank">Release Notes</a>. Here are the values I use:<br /><br />CacheSize = 4GB (64-bit) 1GB (32-bit)<br />
Stats = 60,000 (64-bit) 30,000 (32-bit)<br /><br />BlockSize = 4<br />
ChunkSize = 21 (2MB)<br />
RxUdpBufSize = 12582912<br />
SecurityLevel = 1 (when I need speed I use "fs setcrypt" to adjust on the fly)<br />
ServerThreads = 32<br />
TraceOption = 0 (no logging)<br />
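For reference, ChunkSize is a base-2 exponent, so the values above translate to bytes as follows (a quick sanity check of the arithmetic, nothing more):

```python
def chunk_bytes(chunk_size_exponent: int) -> int:
    """ChunkSize is log2 of the chunk size in bytes: 21 -> 2 MB."""
    return 1 << chunk_size_exponent

def cache_chunks(cache_bytes: int, chunk_size_exponent: int) -> int:
    """How many chunks a cache of the given size holds."""
    return cache_bytes // chunk_bytes(chunk_size_exponent)
```

With the 64-bit settings above, a 4 GB cache divided into 2 MB chunks holds 2048 chunks.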
<br />
Non-performance-related options that I use:<br />
<br />
DeleteReadOnly = 0 (do not permit deletion of files with the ReadOnly attribute set)<br />
FollowBackupPath = 1 (mount points from .backup volumes search for .backup volumes)<br />
FreelanceImportCellServDB = 1 (add share names for each cell in CellServDB file)<br />
GiveUpAllCallbacks = 1 (be nice to file servers)<br />
HideDotFiles = 1 (add the Hidden attribute to files beginning with a dot)<br />
UseDNS = 1 (query DNS)<br />
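Collected as a registry import, the values above look roughly like the sketch below. This is a sketch, not an authoritative file: it assumes the standard OpenAFS service parameter key and that CacheSize is a DWORD expressed in kilobytes and Stats a DWORD count of entries. Confirm each name, type, and unit against Appendix A of the Release Notes before importing.

```reg
Windows Registry Editor Version 5.00

; Hedged sketch of the tuning values above (64-bit host).
; CacheSize is assumed to be in KB; verify units against Appendix A.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TransarcAFSDaemon\Parameters]
"CacheSize"=dword:00400000        ; 4,194,304 KB = 4GB
"Stats"=dword:0000ea60            ; 60,000 status cache entries
"BlockSize"=dword:00000004
"ChunkSize"=dword:00000015        ; 2^21 bytes = 2MB chunks
"RxUdpBufSize"=dword:00c00000     ; 12,582,912 bytes
"SecurityLevel"=dword:00000001
"ServerThreads"=dword:00000020    ; 32
"TraceOption"=dword:00000000      ; no logging
"DeleteReadOnly"=dword:00000000
"FollowBackupPath"=dword:00000001
"FreelanceImportCellServDB"=dword:00000001
"GiveUpAllCallbacks"=dword:00000001
"HideDotFiles"=dword:00000001
"UseDNS"=dword:00000001
```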
<br />Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-14467096628508618862011-10-02T11:38:00.002-04:002011-10-02T11:38:31.741-04:00Heimdal: Now Playing on Windows Near YouToday, <a href="http://article.gmane.org/gmane.comp.encryption.kerberos.heimdal.announce/24">Heimdal 1.5.1 was announced</a> including support for Microsoft Windows. Asanka Herath gave an <a href="http://workshop.openafs.org/afsbpw10/thu_3_2.html">excellent presentation</a> on the design plans at the <a href="http://workshop.openafs.org/afsbpw10/">2010 AFS and Kerberos Best Practices Workshop</a>. The Heimdal port began in December 2008 in response to several motivations:<br />
<ol>
<li>Several large Secure Endpoints clients were experiencing significant upgrade problems with MIT Kerberos for Windows due to backward compatibility problems between versions 2.6.x and 3.x. The problems were due to what is affectionately known as <a href="http://en.wikipedia.org/wiki/DLL_Hell">DLL Hell</a>. Applications built against old versions of KFW do not work with newer versions and vice versa because the list of function exports and the ordinal bindings changed. To make matters worse, it isn't possible to have more than one version of KFW installed on a system at any given time. This is because KFW libraries must be installed in a directory listed in the system PATH environment variable. To address this problem Secure Endpoints issued a <a href="http://www.secure-endpoints.com/kfw/proposal-kfw-assemblies.html">proposal to MIT</a> in July 2008 that KFW be converted to use Windows <a href="http://en.wikipedia.org/wiki/Side-by-side_assembly">Side-by-side Assemblies</a>. This proposal along with others to improve Network Identity Manager went over like a lead balloon at the <a href="http://www.kerberos.org/">Kerberos Consortium</a>.</li>
<li>Secure Endpoints began work on incorporating Hardware Secure Modules such as <a href="http://www.thales-esecurity.com/en/Products/Hardware%20Security%20Modules.aspx">Thales' nShield</a> into a <a href="http://www.secure-endpoints.com/kca/kca_service.html">Kerberized Certificate Authority</a> that could be approved of by <a href="http://www.tagpma.org/">The Americas Grid Policy Management Authority</a>. TAGPMA requires that all certificate authorities store their keys in hardware. This naturally led us to wonder if we could do the same for a Kerberos <a href="http://en.wikipedia.org/wiki/Key_distribution_center">Key Distribution Center (KDC)</a>. Heimdal already supported the <a href="http://www.openssl.org/docs/crypto/crypto.html">OpenSSL crypto library </a>which could be used with the nShield HSM. Asanka presented <a href="http://workshop.openafs.org/afsbpw09/wed_3_3.html">our ideas</a> at the <a href="http://workshop.openafs.org/afsbpw09">2009 AFS and Kerberos BPW</a>.</li>
<li>Finally, OpenAFS needed a number of changes to Kerberos and GSS-API in order to be able to implement the rxgk security class. There have been numerous presentations on the need for rxgk over the years. Love gave a talk in <a href="http://workshop.openafs.org/afsbpw07/talks/lha.pdf">2007</a>, Simon gave one in <a href="http://workshop.openafs.org/afsbpw10/fri_1_1.html">2010</a>, and another in 2011. In fact, the rxgk work began back in 2004 at an AFS hackathon in Sweden. Implementing rxgk requires that all supported platforms provide a <a href="http://tools.ietf.org/html/rfc3961">Kerberos Crypto Framework (RFC 3961)</a> and the <a href="http://tools.ietf.org/html/rfc4401">GSS Pseudo-Random Function (RFC 4401)</a>. MIT Kerberos doesn't export an RFC 3961 compatible crypto framework in any version, and with the failure to put any resources behind the Windows product there was no GSS PRF support. The OpenAFS development community has found the Kerberos Consortium quite difficult to work with, whereas Heimdal welcomed the proposed changes with open arms. Heimdal redesigned their repository layout to make it possible for OpenAFS to import core functionality such as the cross-platform compatibility library libroken, the hcrypto library, and the rfc3961 framework. This in turn permits OpenAFS developers to focus on building a best of breed distributed file system and avoid the need to build and support a Kerberos v5 and GSS-API implementation. Heimdal is more than just a Kerberos implementation, which will permit OpenAFS to more easily support non-Kerberos authentication mechanisms once rxgk is deployed.</li>
</ol>
The Secure Endpoints distribution of Heimdal is more than just a port to Microsoft Windows. In order to properly address the needs of existing KFW users and developers, the Heimdal distribution includes a set of KFW 3.x compatible DLLs that act as a shim layer that converts requests issued using the MIT API and forwards them to the Heimdal assembly for processing.<br />
<br />
For developers, Secure Endpoints is now distributing a <a href="https://github.com/secure-endpoints/heimdal-krbcompat">Kerberos Compatibility SDK</a> that will permit applications to be developed which can work seamlessly regardless of whether Heimdal or MIT Kerberos is installed on the system. OpenAFS and all future Secure Endpoints applications such as Network Identity Manager and the Kerberized Certificate Authority will be built against this SDK. Applications built against the SDK first search for a compatible Heimdal assembly. If an assembly is not installed on the system, KFW DLLs are searched for in the PATH and manually loaded.<br />
<br />
One important difference between Heimdal and KFW relates to how credential caches and keytabs are implemented. Instead of compiling all supported cache and keytab types into the Heimdal libraries, Heimdal loads credential caches and keytabs as registered plug-ins. This permits weak cache and keytab implementations to be removed on systems where they shouldn't be supported and permits new implementations to be developed independently of the Heimdal distributions. This functionality is going to become very useful for OpenAFS users on Microsoft Windows now that OpenAFS 1.7.x includes <a href="http://docs.openafs.org/ReleaseNotesWindows/ch03s54.html">native authentication groups</a>. For the first time it will be possible to develop secure Kerberos credential cache and keytab implementations whose contents become accessible to processes that are impersonating other processes, something that has only been possible with the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa377942%28v=vs.85%29.aspx">Microsoft Kerberos SSP</a> up to this point.<br />
<br />
All in all, the release of Heimdal for Microsoft Windows is an important step forward.<br />
<br />
<br />
Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-75570469172409545382011-09-18T03:03:00.000-04:002011-09-18T03:04:01.438-04:00The OpenAFS IFS Edition is Finally HereI first proposed the idea of a native redirector based OpenAFS
client at the 2004 AFS Best Practice Workshop held at SLAC in March 2004 as part of my <a href="http://www-conf.slac.stanford.edu/AFSBestPractices/Slides/jeffaltman.pdf">Future Directions for the AFS Client on Windows</a> talk. The talk was my first public assessment of the OpenAFS client for Microsoft Windows. In fact it was my first presentation as an OpenAFS gatekeeper, having only been working with the code base for four months. In that time a large amount of low hanging fruit was picked but there was so much more to be done. I wonder how many of the attendees actually believed that even half of the known issues would be resolved in the years to come, let alone an installable file system driver. Prior to 1.3.60 it wasn't even possible to deploy OpenAFS clients on Microsoft Windows with a uniform name space. Instead of accessing resources via the \\AFS\cellname UNC path, all paths were accessed via \\%HOSTNAME%-AFS\ALL\cellname where %HOSTNAME% was the local machine's NetBIOS name.<br />
<br />
By September 2004, CITI at the University of Michigan agreed to fund a graduate student, Eric Williams, to develop an IFS interface for the OpenAFS cache manager. Eric's implementation was delivered during the Summer of 2005. The <a href="http://git.openafs.org/?p=openafs.git;a=commit;h=3cc3cedba03827ba3796740a40f2f75bba85a44e">first code</a> dropped in mid-June and the <a href="http://git.openafs.org/?p=openafs.git;a=commit;h=fc0ca363da52144628a35abae30315257bbf76a8">final code</a> dropped in early August. Eric's implementation was built using Microsoft's IFS Kit and implemented a mini-redirector interface. It provided support for anonymous \\AFS access without the use of a loopback adapter but did so by mimicking the SMB message flows. Eric was able to demonstrate 5x performance improvements over the SMB interface. At the end of the Summer Eric moved on to other obligations and work on the redirector interface stalled.<br />
<br />
On August 28, 2006, I was introduced to Peter Scott of <a href="http://www.kerneldrivers.com/">Kernel Drivers</a>. Peter is a Microsoft MVP and a world renowned Windows kernel specialist with a passion for file systems. Peter volunteered to review the goals I had laid out for the OpenAFS client and the code that Eric Williams had developed. Three major issues were identified during the review. First, OpenAFS is a caching file system and the method used to deliver data to satisfy paging requests made it impossible to guarantee that data cached by Windows would be purged in response to a data version change produced by another machine. Second, the mini-redirector interface underwent a significant change with the introduction of Microsoft Vista and maintaining a common code base across XP, Vista and beyond would have been impossible. Third, the implemented functionality was sufficient to create, open, close, read from, write to, etc., but the OpenAFS client failed to support a large number of features required by Windows applications such as Unicode character sets, 64-bit file sizes, 64-bit kernels, the WNet API, volume information queries, security information queries, quotas, RPC services such as WKSSVC and SRVSVC, reparse points, and more.<br />
<br />
The long term goal for the OpenAFS client for Microsoft Windows was not simply a file system that did not rely on the Microsoft SMB redirector and a loopback adapter. The goal was to produce a best in class file system that integrated AFS into the Microsoft Windows experience. Peter and I concluded that we should start over and design an architecture that could support all of the functionality that I desired for OpenAFS and meet some very aggressive performance goals. <br />
<br />
Peter had developed a full redirector file system called KDFS which he used for the development of custom file systems for Kernel Drivers clients. Peter agreed to license the code under a BSD style license to OpenAFS. This permitted us to use KDFS as a starting point. On April 21, 2007 we began coding.<br />
<br />
We designed an
architecture that would not only permit use of a native redirector on Windows XP
SP2 through current and future Windows releases but provide a low-risk transition
strategy for individuals and organizations to use when migrating from SMB to
redirector based interfaces. One of the key decisions was to maintain both the SMB and IFS interfaces as peers and require that all application visible functionality be implemented in both. This approach permitted all new functionality to be deployed to end users as updates to the existing 1.5 release series. Major functional improvements that were shipped prior to the 1.7.1 included:<br />
<ul>
<li>Unicode (UTF-8) encoded file names [1.5.50]</li>
<li>Interface independent Path Ioctl processing [1.5.50]</li>
<li>Pipe Service RPC emulation for wkssvc and srvsvc [1.5.62]</li>
</ul>
In addition, literally hundreds of bugs in the cache manager were uncovered and corrected as part of the isolation of the SMB server from the generic AFS cache management layer. All of these improvements were released as the work was completed providing the end user community immediate benefits and a guarantee that when the IFS interface did ship the cache manager would be unchanged. <br />
<br />
The selected architecture permits a single afsd_service.exe to be used either
in conjunction with an AFS Redirector driver (afsredir.sys) or with the AFS SMB
Server that has been in use for the last fifteen years. When the AFS
Redirector driver is present and active on the system, the SMB Server is
disabled. If the driver is not active, the SMB Server is automatically
started. In addition to the afsredir.sys driver there is one other new
component, the AFSRDFSProvider.dll which comes in both 64-bit and 32-bit
flavors. This Network Provider permits the Explorer Shell to browse
\\AFS and its cells under the "Network" object as its
own category "OpenAFS Network". To switch back and forth between the
SMB-mode and the AFS-Redirector-mode, all that needs to be done is to disable
the AFSRedirector driver in the registry.<br />
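To make that last step concrete: a Windows driver is disabled by setting its service Start value to 4 (SERVICE_DISABLED) and rebooting. The fragment below is only a sketch; it assumes the driver's service key is named AFSRedirector under Services, and the value to restore when re-enabling is whatever the installer originally set, so record it before changing anything.

```reg
Windows Registry Editor Version 5.00

; Sketch: disable the AFS redirector driver so the SMB server is used
; at next boot. Record the existing Start value before importing and
; restore it to re-enable the redirector. 4 = SERVICE_DISABLED.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFSRedirector]
"Start"=dword:00000004
```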
<br />
In general the application behavior when using the AFS Redirector interface
should be the same as the AFS SMB Server. However, there are some
differences:<br />
<ul>
<li>The AFS Redirector interface publishes AFS mount points and symlinks as file system reparse points using a Microsoft assigned
OpenAFS reparse tag.
<ul>
<li>Applications that are reparse point aware may no longer cross the
reparse point without explicit direction.</li>
<li>Applications that are reparse point aware but not OpenAFS tag aware
will not understand what to do with the reparse point data. Ask
vendors to contact <a href="mailto:openafs-gatekeepers@openafs.org">
openafs-gatekeepers@openafs.org</a> to learn how to make their
applications OpenAFS aware.</li>
</ul>
</li>
<li>Drive mappings to UNC paths that were made using the SMB interface will
not be accessible via the AFS Redirector interface until they are removed
and recreated. This is because Windows assigns a drive mapping to a
particular file system driver. When the SMB interface was used, the network in use was "Microsoft Windows Network". When the AFS Redirector interface is active, the network is "OpenAFS Network".</li>
<li>Drive mappings made with the SMB Redirector were not considered to be
available when the target path could not be resolved due to either no
network access or lack of appropriate authentication credentials. The
AFS Redirector does not disable a drive mapping due to lack of network access or necessary permissions.</li>
<li>The AFS Redirector does not require the presence of the Microsoft
Loopback Adapter. When the AFS Redirector is in use, the loopback
adapter is ignored. There are no delays in accessing the
\\AFS name space after a suspend or reboot.</li>
<li>Applications that report the speed of file copies will report the speed
of writing to the Windows cache, not the time writing to the AFS file server.
This is because the AFS Redirector does not require synchronous writes to
the file server for each write by the application. The behavior is
closer to that of the Unix cache manager where data is written to the file
server only when the Windows cache manager (not to be confused with the AFS
cache manager on Windows) flushes dirty extents to the backing store.</li>
<li>Due to the existence of the new Network Provider DLL, it is extremely
important that the 64-bit WOW MSI be installed on 64-bit systems.
Otherwise, 32-bit applications will not be able to open files in
\\AFS when using UNC paths.</li>
<li>There is no support for Offline Folders when using the AFS redirector interface. This is because Offline Folders is a feature of the SMB redirector and not a generic capability layered above arbitrary network file systems. </li>
<li>Drive letter substitutions (SUBST D: \\UNC\path) to \\AFS paths will appear as a disconnected network file system when SMB is used but will be connected when the AFS redirector is active.</li>
<li> When the \\AFS name space is viewed via the SMB redirector the directory pointed to by the share name is assumed to be the root directory of the entire name space regardless of how many AFS mount points are crossed. When the AFS redirector is used, every AFS volume is recognized by Windows as a separate file system.</li>
</ul>
On the whole, the behavioral changes when switching from SMB to AFS redirector favor the new implementation. This is especially true when the performance improvements are taken into account.<br />
<br />
There are a number of subtle design decisions that are worth discussing.<br />
<br />
One of the benefits of the SMB-only OpenAFS service was that it ran entirely as a user-space service that could be stopped at any time, be replaced with new binaries, and restarted. Microsoft Windows file system drivers, once loaded, cannot be unloaded. In order to permit upgrades to the afsd_service.exe and kernel driver to be applied without a reboot, Peter and I decided to implement the afsredir.sys driver as a framework-only driver which in turn loads a kernel library driver, afsredirlib.sys, that contains the vast majority of the AFS specific implementation details. When the OpenAFS Service is stopped, the afsredirlib.sys library is unloaded by afsredir.sys and all operations on \\AFS file objects are suspended until the OpenAFS Service is restarted. This permits upgrades to be performed on live systems with active applications.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-evR6pgfSDDs/TnWKmX2HHzI/AAAAAAAAACg/-KSKcIxe8sY/s1600/afsredir-arch.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="175" src="http://1.bp.blogspot.com/-evR6pgfSDDs/TnWKmX2HHzI/AAAAAAAAACg/-KSKcIxe8sY/s320/afsredir-arch.png" width="320" /></a></div>
The major benefit of the AFS redirector architecture is an improvement in data throughput between the OpenAFS Service and the AFS redirector. Both the service and the kernel driver share access to the memory mapped AFS cache file. As a result, instead of sending data in-band within a FetchData or StoreData ioctl, the service and redirector simply exchange ownership over file extents within the cache. This avoids a large number of data copies and reduces the CPU cost of each ioctl. With this model in place reads from the AFS cache of nearly 800MB/second have been observed. This is approximately 12 times the best performance ever observed with the SMB interface.<br />
<br />
The AFS redirector has a sophisticated Authentication Group implementation. For those that are unaware, the UNIX AFS client implements Process Authentication Groups (PAGs). A PAG is a collection of processes that share a common set of network credentials. A process inherits PAG membership from its parent process but can choose to remove itself from the PAG or create itself a unique PAG. This permits different processes running as local root to execute with different sets of network credentials.<br />
<br />
For Microsoft Windows, where a Thread object is just as much a first-class object as a Process object, the Authentication Group model has been extended to permit processes to belong to more than one authentication group at a time. Each process has one default authentication group active at a given time and each thread can select its own active group or use the process default group. This approach permits applications such as IIS to create a unique authentication group for each remote identity and activate that authentication group for each thread handling a request on behalf of that identity. When a new process is created it only inherits the one authentication group that was active.<br />
<br />
Authentication groups are tracked as part of the Windows DACL in the Process or Thread Token. When a process or thread performs a Local Procedure Call to a background service these tokens permit the background service to impersonate the caller. When impersonation is active, the background service inherits not only the Windows SID of the calling process but also the active authentication group. This ensures that LPCs execute with exactly the AFS permissions of the calling process.<br />
<br />
Microsoft Windows supports multiple subsystems. The most well known is the Win32 subsystem. When NT was originally shipped there were also OS/2 and POSIX subsystems. On 64-bit Windows, in addition to Win32, there is the WOW64 subsystem which provides the 32-bit application compatibility layer. The AFS redirector tracks which subsystem is in use and can use the active subsystem to select which @sys search list should be used. A separate list is maintained for each subsystem.<br />
<br />
The first official OpenAFS.org release to include the new AFS redirector was 1.7.1, published on September 15, 2011: seven and a half years after the initial proposal and 1608 days after Peter and I began the current implementation. The <a href="https://secure.wikimedia.org/wikipedia/en/wiki/COCOMO">Basic COCOMO</a> model (with coefficients a=2.4 and b=1.05) estimates the cost of implementing the AFS redirector and the changes to the OpenAFS Service at approximately US$1.2 million. It can be honestly said that this project would never have been completed if it weren't for the fact that Peter Scott and I were willing to work unpaid for long stretches of time while we searched for additional funding to bring the project to completion.<br />
<br />
The release of 1.7.1 does not mean that the project is complete. There are still many features that I want to see implemented. Here is a partial list:<br />
<ul>
<li> The Windows File System Volume Query Quota Interface is not implemented. As a result, AFS quota information is not available to application processes or end users via Windows dialogs.</li>
<li>The Windows Volume Shadow Copy Service is not implemented. As a result, AFS backup volumes are not accessible via the Explorer Shell.</li>
<li>There is no support for storing DOS attributes such as Hidden, System, or Archive.</li>
<li>There is no support for Alternate Data Streams as required by Windows User Account Control to store Zone Identity data.</li>
<li>There is no support for Extended Attributes.</li>
<li>There is no support for Access Based Enumeration.</li>
<li>There is no support for Windows Management Instrumentation.</li>
<li>There is no support for Distributed Link Tracking and Object Identifiers.</li>
<li>There is no support for storing Windows Access Control Lists. Only the AFS ACLs are enforced.</li>
<li>There is no support for offline folders or disconnected operations.</li>
<li>There is no Management Console for the OpenAFS Service </li>
</ul>
The funding for the AFS redirector came from a handful of organizations. Now that OpenAFS 1.7.1 is available I request that any organization that relies on the use of the OpenAFS client on Microsoft Windows contribute US$20 per copy to cover unfunded expenses and future development.<br />
<br />
To end on another positive note, the OpenAFS 1.7.1 release has been tested on the Microsoft Windows 8 Developer Preview and it runs flawlessly. Now all we need are some nice Metro applications to take advantage of \\AFS.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com2tag:blogger.com,1999:blog-3333505129375354922.post-9357329120985050982009-09-06T22:23:00.003-04:002009-09-07T00:20:44.946-04:00When the impossible happens, reconsider the assumptions of what is possibleEver since Secure Endpoints started receiving OpenAFS for Windows crash reports from Microsoft there have been a small number of reports each month in applications that load libafsauthent.dll (afscreds.exe, netidmgr.exe, ...) and others that perform AFS pioctls. It has been the rare case that a minidump has been available. The dumps that have been provided have made no sense. It's been clear that the stack or heap has been overwritten but other than that there has not been enough data to provide a clue where to start looking. <br /><br />Last week OpenAFS 1.5.62 was released. It was an important release that fixed a long standing data corruption error, something I have been trying to find for more than a year. Combine it with the support for WKSSVC and SRVSVC services providing vastly improved share name enumeration and Windows 7 compatibility and 1.5.62 was a release that I wanted everyone to upgrade to. Unfortunately, the release proved to have two downsides that did not come out during testing. First, Cygwin applications could not access /afs. Second, roaming profiles in some environments failed to work. The Cygwin compatibility problem was traced to the addition of (supposedly mandatory) extended responses to NTCreateAndX requests. 
The roaming profiles issue was caused by previously unseen requests to open directories as "Directory::$DATA" instead of "Directory".<br /><br />Given the importance of the 1.5.62 release and the show stopper nature of the two issues that had been introduced with it, I spent a good portion of this Labor Day weekend testing it. Lo and behold, during testing Network Identity Manager crashed in the Visual Studio 8 CRT memcpy(). The crash signature looked similar to many I have seen in the past but this time I had access to not just the stack trace but the entire memory image to examine in a live debugger. Not surprisingly, the state of the process made no sense. It was unclear if the stack had been damaged. Could the data be real? The memcpy() was attempting to read data out of a buffer populated by a pioctl(). The buffer size is 16KB. The data that should have been returned should not have been more than a few hundred bytes. Yet, the memcpy() was attempting to read beyond the end of the buffer. Examining the contents of the buffer closely showed that the data in the buffer did not match the request. Instead of the buffer containing a GetToken response it contained a WhichCell response. Parse the string "Freelance.Local.Root" as if it were a marshalled token and all hell breaks loose.<br /><br />Two questions came to mind. First, why is there no data validation of the data received via the pioctl()? Second, how in the world did the wrong response end up being received in the first place? The lack of data validation, although completely wrong, is not all that surprising. This source code has not been modified since the original IBM contribution. It wasn't causing any problems and therefore didn't attract attention. 
The response confusion was surprising.<br /><br />The OpenAFS pioctl() interface on Microsoft Windows works by implementing a <span style="font-weight: bold;">transceive</span> (an atomic write request / read response) operation using CreateFile(), WriteFile(), ReadFile(), CloseHandle(). The OpenAFS SMB server treats a NTCreateAndX operation on the magic file name "_._AFS_IOCTL_._" as the trigger to indicate that a pioctl() is being performed. Each time the file is opened a new SMB file identifier is allocated. The caller writes the pioctl request to the file and then when the first read is issued, the requested operation is performed and the response data is queued up and sent in response. The caller issues ReadFile calls until end of file is reached and then the file is closed. Given this model, how is it that the response could possibly get confused?<br /><br />My first theory was that a bug in the OpenAFS SMB server was issuing the same file id to two requestors. After close examination of the code it turns out that due to a thread safety issue there was a race that could result in that scenario. After fixing the race, I attempted to prove that the race was the cause of the problem. I kicked off five scripts executing a different pioctl operation 100,000 times. The client side bug was obviously being triggered but there was no evidence that the race I discovered had anything to do with it. Especially considering the fact that the problem continued to occur after the fix to prevent the race was installed.<br /><br />The next step was to examine the behavior of the five scripts using Sysinternals' Process Monitor while filtering on all access to paths beginning with "\\afs". The output was quite revealing. It showed that requests and their responses were mismatched, judging solely by the lengths of the responses. Some ReadFile() operations failed with end of file errors on the first read. 
<br /><br />At this point it was time to start examining the trace output of afsd_service. What I discovered was that the smb_IoctlPrepareWrite() and smb_IoctlPrepareRead() functions were being called multiple times on the same SMB file id. The theory that the same pioctl instance was being used for requests from multiple processes proved to be correct. The question remained, why was it happening? Further examination of the trace output showed something even more curious. A large number of NTCreateAndX calls were missing from the output. I expected to see one NTCreateAndX operation for each pioctl request. In fact, that was a basic assumption the original author of the pioctl interface must have relied upon. Too bad for all of us that it isn't true.<br /><br />As it turns out, the Microsoft SMB redirector chooses to avoid multiple NTCreateAndX calls for a file if all of the active requests have the same security privileges and request the same access modes. Instead, the SMB redirector manages the various open/close operations locally and only closes the file after it has been idle. The CreateFile operations were issued with the FILE_SHARE_READ|FILE_SHARE_WRITE share mode. This permitted multiple apps to open the file simultaneously and perform writes and reads. If two processes open the file and write a request before the first process reads its response, the first process will receive the response meant for the second process and the second process will receive an end of file error. One solution is to remove FILE_SHARE_WRITE in order to ensure that only one process can open the pioctl file at a time.<br /><br />It is now possible to run the five simultaneous pioctl performing scripts without a single error. Even so, data validation checks have been added to libafsauthent.dll to prevent invalid input from crashing applications in the future. 
I'm now looking forward to the 1.5.63 release and examining the Windows Error Reporting logs in a couple of months to confirm that the random crashes are no longer being reported.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-71632720684445798742009-02-23T14:17:00.004-05:002009-02-23T14:53:59.088-05:00<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_r5qE1HPAWUo/SaL3zCh14KI/AAAAAAAAAAw/gjqt7qn4GCE/s1600-h/nim-v2-custom-icons.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px; height: 313px;" src="http://2.bp.blogspot.com/_r5qE1HPAWUo/SaL3zCh14KI/AAAAAAAAAAw/gjqt7qn4GCE/s320/nim-v2-custom-icons.png" alt="" id="BLOGGER_PHOTO_ID_5306075767220068514" border="0" /></a><br />Its been nearly two years since the release of Network Identity Manager 1.3 as part of MIT Kerberos for Windows. Network Identity Manager is preparing to breakout on its own with version 2.0.<br /><br />With version 2.0 the door is opened for identities based upon authentication technologies other than Kerberos v5. Whereas version 1.x is limited to providing a single sign-on experience when the initial authentication is performed with a Kerberos v5 principal name and password, version 2 permits KeyStore and Certificate initial authentication identities to be implemented. A KeyStore authentication can be used to automatically obtain Kerberos v5 ticket granting tickets for multiple Kerberos v5 identities. Each identity in turn can be used to obtain its own derived credentials such as AFS tokens, Kerberized Certificate Authority issued short lifetime X.509 client certificates, or various forms of web authentication credentials. Certificate based identites might be used with Public Key Initial Authentication for Kerberos (PKINIT) or the Globus Global Security Infrastructure. 
<br /><br />Version 2 also improves the end user experience with:<br /><ul><li> a new identity creation wizard</li><li>progress dialogs</li><li>a streamlined and less error-prone mechanism for obtaining new credentials</li><li>an updated credential display that is cleaner, less confusing, and more informative</li></ul>For additional information on the upcoming Network Identity Manager version 2 see:<br /> <a href="http://www.secure-endpoints.com/netidmgr/roadmap.html">http://www.secure-endpoints.com/netidmgr/roadmap.html</a>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-26601410323293015212008-08-02T17:38:00.000-04:002008-10-08T20:43:57.741-04:00OpenAFS for Windows with Unicode is AvailableA couple of weeks ago OpenAFS for Windows with Unicode path name support was released. I thought this was going to be a big deal. Due to the lack of Unicode support there were all sorts of problems for organizations that wanted to use roaming profiles and redirected folders. Even more important is the fact that the vast majority of the world does not limit its writing to the characters represented in Windows OEM Code Pages 437 and 850. For years these individuals could not save their data into AFS using the language of their choice. <br/><br/>Up to this point, 1.5.5x has had one of the slowest adoption rates of any OpenAFS for Windows release over the last five years. Is this because it is summer? Is it because most users are Americans and they do not require Unicode? Is it because everyone has given up on AFS? I don't know. <br/><br/>What I do know is that the Unicode version has been downloaded (in small numbers) from a broad range of top-level domains outside the United States, including Malaysia, Russia, Canada, Germany, Taiwan, Brazil, Hong Kong, Poland, Yugoslavia, Croatia, Japan, and Indonesia. 
Hopefully, users from these countries will write in to describe how Unicode support has made their lives easier.<br/><br/><br/><div class="tags" id="tagsLocation"><br/>Tags: <a rel="tag" target="_blank" href="http://technorati.com/tag/openafs+unicode">openafs unicode</a></div>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-67492816814552781722008-05-14T12:21:00.000-04:002008-10-08T20:43:57.741-04:00File System Internationalization sucksInternationalization in file systems really sucks. There are two perspectives
in the world. First, there are the POSIX proponents, who believe that names are simply nul-terminated octet sequences that have no meaning except to the application that created them. Second, there are those who believe that names should be portable between systems and therefore should all be encoded in a common character set. Let's call this second group of folks the UNICODE camp. <br/><br/>I fall into the UNICODE camp. This is most likely a side effect of having spent nearly fifteen years of my life working on Kermit, an application and file transfer protocol designed specifically to move files (by name) between computer systems using different architectures and locales. I learned very early on that if you follow the POSIX approach, the end result when a file is copied from an EBCDIC system to an ASCII system, or from a Latin-1 system to a CP437 system, is gibberish. Not only for human beings but for the applications as well.<br/><br/>A globally accessible file system such as AFS is in many regards similar to Kermit except that instead of copying files into a local file system from a remote system, the AFS client makes the entire remote file system accessible to the local machine. The exact same character set conversion issues occur. As long as all of the file names are in the same character set all is dandy and applications on one machine can access files created on another machine.<br/><br/>But what happens when the character sets are different? In that circumstance, the names become gibberish to humans and applications. In a worst case scenario, the file name as stored in the directory cannot even be represented on the local machine because the file name contains illegal code points according to the rules of the local environment. <br/><br/>This situation doesn't happen as frequently as it could because most of the world is still storing only US-ASCII or ISO-Latin-1 into the file system. However, even with those restrictions there are still problems. For example, the following characters are illegal on Windows systems:<br/><br/> " / \ * ? < > | : <br/><br/>It doesn't matter what the underlying file system is.
If those characters are in the name, the name is illegal. Any name with those characters will not be included in the directory listing.<br/>This in turn means it is impossible to see the file, access the file, rename the file, delete the file, or delete the directory the file is located in. File systems that include objects with such names must perform name translation in order for Windows users or applications to be able to manipulate them.<br/><br/>With the introduction of Unicode another set of complications is introduced. Unicode provides for multiple semantically equivalent encodings of the same string based upon whether composed or decomposed sequences are used. For historical reasons, MacOS X stores its file names using UTF-8 encoding of decomposed Unicode sequences, Microsoft Windows stores composed Unicode sequences, Linux also stores composed sequences, and all of the sequences for a given string can be different. That means that a user who types the same string on all three platforms will obtain a different octet sequence for each platform. So much for interoperability. <br/><br/>The POSIX supporters make the claim that names must be treated as octet strings because the locale between two different processes on the same machine can be different. All that tells me is that POSIX allows users to shoot themselves in the foot. It doesn't mean it is right. Of course, the POSIX folks do have a point. If a UNIX system is incapable of communicating the character set that is being used to the file system, how is the file system supposed to do something sane with it to provide for interoperability between heterogeneous environments?<br/><br/>Microsoft Windows has an advantage here in that there is a standard character set for the entire operating system and all file systems: Unicode.
As a result a file system client on Windows can at least ensure that Unicode names are normalized on output, that directory entry names are normalized for display and lookup, that all illegal characters are mapped to something legal, and ensure that all strings communicated with the file server are the original directory entry names and not the normalized names used locally. This is the approach that will be taken as Unicode is added to the OpenAFS for Windows client.<br/><br/> <div class="tags" id="tagsLocation"><br/>Tags: <a rel="tag" target="_blank" href="http://technorati.com/tag/afs">afs</a>, <a rel="tag" target="_blank" href="http://technorati.com/tag/unicode">unicode</a>, <a rel="tag" target="_blank" href="http://technorati.com/tag/internationalization">internationalization</a>, <a rel="tag" target="_blank" href="http://technorati.com/tag/i18n">i18n</a></div>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-56633951002185060412008-03-12T20:39:00.000-04:002008-10-08T20:43:57.742-04:00OpenAFS joins Google Summer of Code 2008Today OpenAFS submitted an application to take part in the 2008 Google Summer of Code. 
OpenAFS project ideas are listed at <a href="http://www.openafs.org/gsoc.html">http://www.openafs.org/gsoc.html</a>.<br/><br/>Thanks to Asanka Herath, Matt Benjamin, Simon Wilkinson and Derrick Brashear for volunteering to be mentors to the next generation of OpenAFS developers.<br/><br/>Update: Monday 17 March 2008, OpenAFS was accepted.<br/><br/><div class="tags" id="tagsLocation"><br/>Tags: <a rel="tag" target="_blank" href="http://technorati.com/tag/openafs+google+summer+of+code+gsoc">openafs google summer of code gsoc</a></div>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-13859440311816420122008-03-04T22:59:00.000-05:002008-10-08T20:43:57.742-04:00OpenAFS vs Norton Internet Security 2008OpenAFS requires several rules to be set in order to work with Norton Internet Security 2008.<br/><br/>1. Under "Personal Firewall->Program Control" add an "Allow" rule for "C:\Program Files\OpenAFS\Client\Program\afsd_service.exe".<br/>2. Do the same for "fs.exe", "aklog.exe", and other command-line utilities if so desired.<br/>3. Under "Personal Firewall->Trust Control", on the Trusted tab, add a "Trusted" rule for "02-00-4C-4F-4F-50".<br/>4. Under "Personal Firewall->Advanced Settings", press the "Configure" button.<br/>5. Add a new rule:<br/> "Allow", "Inbound", "Any computer", "Protocol: UDP", "Port 7001", and describe it as "AFS Callback Port". Make it the first rule in the list.<br/>6. Add a new rule:<br/> "Allow", "Outbound", "Any computer", "Protocol: UDP", "Port range: 7001-7008" and describe it as "AFS Server Ports". Make it the second rule in the list.<br/><br/>Finally, double-check the configuration of the "Microsoft Loopback Adapter" labeled "AFS" in the Network Control Panel. 
Make sure that "TCP/IP" is checked, that "Client for Microsoft Networking" is checked, and that "File and Printer Sharing" is not checked.<br/><br/>You should now be able to access "\\afs\all" in the Explorer Shell.<br/><br/><br/><br/><br/>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-30994137347902636122008-03-02T10:46:00.000-05:002008-10-08T20:43:57.742-04:00I want my OpenAFS Windows client to be fastThere are a number of configuration knobs available to tune the OpenAFS for Windows client. The most important ones for throughput fall into two categories: <br/><br/><u>How much data can I cache?</u><br/>CacheSize<br/>Stats<br/><br/><u>How Fast Can I Read and Write?</u><br/>BlockSize<br/>ChunkSize<br/>EnableSMBAsyncStore<br/>SMBAsyncStoreSize<br/>RxMaxMTU<br/>SecurityLevel<br/>TraceOption<br/><br/>All of these options are described in Appendix A of the <a href="http://www.secure-endpoints.com/oafw/">Release Notes</a>. Here are the values I use:<br/><br/>CacheSize = 60GB (64-bit) 1GB (32-bit)<br/>
Stats = 120,000 (64-bit) 30,000 (32-bit)<br/><br/>BlockSize = 4<br/>
ChunkSize = 21 (2MB)<br/>
EnableSMBAsyncStore = 1<br/>
SMBAsyncStoreSize = 262144 (but would use 1MB if I didn't use cellular networks as often)<br/>
RxMaxMTU = 9000<br/>
SecurityLevel = 1 (when I need speed I use "fs setcrypt" to adjust on the fly)<br/>
TraceOption = 0 (no logging)<br/><br/><br/>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-8623391961112723262008-03-01T17:09:00.002-05:002009-02-23T15:20:12.830-05:00Problems Discovered when Profiling the OpenAFS Windows clientI have spent the last month analyzing the performance of the <a href="http://www.secure-endpoints.com/openafs-windows.html">OpenAFS for Windows</a> cache manager using the <a href="http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx">Sysinternals Process Monitor</a>'s Profiling toolset. The results were quite eye-opening. What I had believed was a highly parallelized code set instead was filled with bottlenecks that seriously hampered the ability to process data at high rates. What follows are some of the most significant issues that were uncovered. Some of the issues are specific to AFS; others are likely to be problems found in many other applications.<br /><br /><u>Reference Counts</u><br />The objects maintained by the cache manager (data buffers, vnodes, cells, SMB file handles, directory searches, users, etc.) are reference counted in order to determine when they should be garbage collected or can be recycled. Reference counts must be incremented and decremented in a thread-safe manner. Otherwise, races between threads updating the reference count will result in the count becoming inconsistent. Objects will either be freed prematurely (undercounts) or never become available for recycling (overcounts). Reference counts were therefore protected by the same read/write locks that protect the hash tables used to find and enumerate the objects. The problem is that although a read lock can be used to safely traverse a hash table's linked list, a write lock is required to safely update the reference count of the desired object once it is located.
As a result, only one thread can be searching for objects or releasing them at a time.<br /><br />If it were possible to adjust the reference count values in an atomic operation, most of the hash table transactions that required write locks could use read locks instead. As it turns out, Windows supports Interlocked increment and decrement operations for aligned 32-bit and 64-bit values. By making use of the Interlocked operations, reference counts are safely adjusted and parallel access to hash table contents is permitted.<br /><br /><u>Network Reads</u><br />The AFS servers support Rx hot threads. As soon as a message is received by the listener thread, another thread is woken to listen for the next incoming message while the current thread becomes a worker to process the message. The AFS clients did not support Rx hot threads and therefore could only process a single incoming message at a time. By activating Rx hot threads in the AFS client, the latency between received messages was significantly reduced.<br /><br /><u>Lock Churn</u><br />Throughout many code paths the same lock or mutex would often be released and re-obtained. Doing so increases the possibility that the current thread will be swapped out and an alternate thread activated. These context switches between threads are expensive and increase the overall clock time required to respond to a request. By refactoring the code it was possible to avoid many such transitions, thereby improving overall performance.<br /><br /><u>Write-to-Read and Read-to-Write Lock Transitions</u><br />Similar to the previous case, there are many situations in which it is desirable to either downgrade or upgrade a read-write lock. Write-to-Read transitions are always safe to perform and can be done without forcing a context switch between threads in all cases. Read-to-Write transitions can be done without a context switch whenever the requesting thread is the only reader.
Regardless of how often it is the case, a read-to-write transition will be cheaper than dropping the read lock and requesting a write lock.<br /><br /><u>Equality comparisons must be cheap</u><br />The function used to determine if two File Identifiers are the same is one of the most frequently called functions. It is used every time a vnode or buffer must be located. As a result, it must be fast. Instead of comparing each of the elements of a FID, the structure was extended with a hash value that can eliminate the vast majority of false matches with a single comparison. In addition, the function was inlined to avoid the function call overhead.<br /><br /><u>Do Not Perform Unnecessary Work</u><br />The AFS client has an extensive logging infrastructure which is disabled by default. However, it turns out that although the actual logging was disabled, a majority of the work required to construct the log messages continued to be performed. This unnecessary work was a significant drain on resources and increased clock time for all operations.<br /><br /><u>Do Not Perform Unnecessary Work - Part II</u><br />When copying a file on top of an existing file, the first operation that is performed is to truncate the file. This results in the invalidation of all the cached data buffers associated with the file. The actual truncation is not sent to the file server until the first write completes, which is not attempted until the first chunk-sized block of data is ready to be sent. As a result, when the initial data buffers are being written to in the cache, the cache manager believed that it must read their contents from the file server. If the pre-fetch criteria were met, additional data buffers would be queued as well. Performing these reads is useless work given the fact that the client will overwrite them or discard them once the truncation is sent to the file server.
The answer of course was to check for the outstanding truncation when getting data buffers.<br /><br /><u>Do Not Perform Unnecessary Work - Part III</u><br />Acquiring mutexes and locks is expensive because it often results in the active thread giving up the rest of its allocated time slice and being forced to be rescheduled at a later time. Therefore, if there are locks that are not required to perform the current operation, they should not be acquired.<br /><br /><u>Do Not Sleep if it is Not Required</u><br />If the file server responds with EAGAIN to an RPC, the cache manager will under most circumstances put the current thread to sleep and try again in a few seconds, provided that the SMB redirector timeout limit has not been reached. There are several operations for which retries are not permitted, including background writes, lock acquisition, etc. Due to an architectural design flaw, the cache manager was putting threads to sleep even if retries were not permitted.<br /><br /><u>Setting Max MTU Size hurts</u><br />Back in 2003 it was discovered that the IPSec VPN products did a very poor job of interacting with AFS due to the reduction in the actual data payload in a UDP packet caused by the addition of the IPSec headers. Due to an ever-increasing number of complaints to Help Desks and to OpenAFS stating that AFS didn't work, it was decided that the OpenAFS installation packages on Windows would ship with the RxMaxMTU value set to 1260. At the time the performance of the cache manager was so bad that it was not possible to notice the difference. Unfortunately, now that the cache manager is better performing, setting RxMaxMTU to 1260 can result in a reduction in StoreData throughput of 50%.<br /><br /><u>Avoid Modifying Objects Whenever Possible</u><br />Every vnode and every data buffer object contains a version number. Every time the vnode changes, the file server increments the version number.
Doing so automatically invalidates the contents of all caches, forcing the clients to re-read the data from the file server. Reading from the file server is an expensive operation, so we try to avoid it when we know that the current contents of the cache are already valid. We know that to be true when the cache manager performed the most recent change to the vnode and the version delta is one. Over the summer, code was added that would bump the version number on all of the data buffers in this circumstance. However, this had the side effect that writes became slower as the file got larger. By maintaining a range of valid data versions instead of just the current data version, it is possible to maintain the benefits of the existing cached data at a cost that is independent of the file size.<br /><br /><u>Hash Algorithms Matter</u><br />The lock package uses an array of critical section objects to protect the internals of the mutex and read/write locks. Which critical section was used for which lock object was determined by hashing the memory address at which the lock was located. Unfortunately, the distribution of the objects was poor and some critical sections were used much more frequently than others. Worse was the fact that several highly used global locks shared the same critical sections.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-16542681758480180242007-09-26T09:44:00.000-04:002008-10-08T20:43:57.743-04:00Windows Error Reporting versus Open Source Development<a href="https://winqual.microsoft.com/wer/">Windows Error Reporting</a> is one of the greatest services that Microsoft has ever provided to developers of applications and device drivers for Microsoft Windows operating systems.
It provides a registered and verified software developer with access to crash report data for that developer's applications.<br/><br/><u>How does it work?</u><br/>When an application terminates unexpectedly or a user terminates an application due to a lack of responsiveness, Windows will capture a mini-dump of the application, the version information of all loaded modules, and the version information for the Windows operating system on which it is being run. The user is then presented with a dialog requesting permission to deliver this information to Microsoft. <br/><br/>Registered application developers provide Microsoft with a mapping file that describes each binary in a product release including version info, link times, and other traits that can be used to uniquely identify the module. When crash reports are received by Microsoft, the WER servers compare each report against the mapped modules. When a match occurs, a WER event is generated and the application developer is notified. <br/><br/>One of the really nice benefits of WER is that it can sort the events into buckets based upon the type of crash, hang, and process state at the time of the crash. If the same type of crash occurs 50 times, all of the matching events will be placed into the same event bucket. Application developers can easily compare the state of all of the crash reports to assist in tracking down the cause.<br/><br/>When a fix is available, the application developer can register a response, which will be delivered to subsequent users that experience the same type of crash with the same version of the module or application. These responses can indicate that the software is not supported on the OS version that it is installed on, or that a new version is available, or that a workaround can be found by reading a provided web page.
<br/><br/>This mechanism benefits both the developers and the end users because as soon as a bug is found it can be fixed without requiring that the end users go through a long process of reporting a crash to the developers directly and being unable to provide enough technical detail for the developers to fix it. Once the fix is available, end users are automatically notified. Less frustration for end users and for developers. Everyone wins.<br/><br/>Unless you are an open source developer or end user....<br/><br/><u>What is the problem with Open Source?</u><br/>Secure Endpoints is an open source vendor. We distribute pre-built installers for Kerberos for Windows and OpenAFS for Windows. For each of these distributions we have binaries and matching symbol data. When a crash report arrives from WER, the mini-dump is loaded into a debugger along with the matching binaries and symbol data. Without the binaries or the symbols, the mini-dump information is much less useful because the stack addresses cannot be matched up with specific functions in the application modules.<br/>As long as the version of the application that is installed is the one Secure Endpoints built, we can make use of the crash reports to identify problems, fix them, and notify end users via the WER response mechanism. <br/><br/>What happens when an organization decides to build the product from the published source code instead of using the pre-built binaries? In that case, WER matches the module names and file version information and places an event into a crash bucket. Secure Endpoints downloads the crash report and loads it into the debugger, only to find that we have neither matching binaries nor matching symbols. The end result is that the WER report is useless. The best I can do is file a response to the end user recommending the use of the pre-built binaries.<br/><br/>I can certainly understand why organizations wish to build their own binaries.
In most cases it's because they want to be able to debug problems they experience in-house. For that they need matching symbol files. This is exactly the reason why both the Kerberos for Windows and OpenAFS for Windows distributions include the symbol files from the official build. This way organizations have all the necessary pieces: binaries, symbol files and source code. Organizations that identify problems internally should file bug reports with the open source maintainers so that fixes can be developed and incorporated into future releases.<br/><br/><br/><br/><div class="tags" id="tagsLocation"><br/>Tags: <a rel="tag" target="_blank" href="http://technorati.com/tag/WER+openafs+oafw+kfw+windows+error+reporting">WER openafs oafw kfw windows error reporting</a></div>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-88289002436632326132007-02-24T10:21:00.001-05:002009-02-23T15:21:37.685-05:00Squeaky wheels receive attention (both good and bad)I spent the past few weeks traveling the country meeting with organizations that use OpenAFS and Kerberos for Windows. I heard a number of really wonderful things:<br /><ul><li>"We haven't had a show stopper event in more than a year"</li><li>"The performance is so much better than it used to be. We no longer receive complaints about how slow it is instead our users send us messages like this one, 'OH My gosh, afs is so fast now since i got my upgrade :)'"</li></ul>At the same time, the amount of funding spent on support and new development has been decreasing. Budgets are always tight and management wants to spend its money on addressing the issues that cause on-going problems. <br /><br />Just a couple of years ago, the OpenAFS Windows client was so bad that not only were organizations sending money but individuals would send personal PayPal payments and bottles of tequila as a "thank you for improving my life". 
These days expectations have changed. The assumption is that the OpenAFS Windows client just works.<br /><br />In the 1.5.15 release of OpenAFS for Windows, a serious data corruption bug was fixed. As it turns out, this bug had been reported to IBM within the last year by an organization that was still using the IBM AFS Windows client. When the organization switched to OpenAFS, it never occurred to them that OpenAFS would have the same problem given their common heritage. OpenAFS is so much better in so many ways that they "just assumed it had already been fixed."<br /><br />The truth is that all of the low-hanging fruit has already been picked. It's not that there is no more work to be done but that all of the remaining work is big. So big, in fact, that it cannot be paid for out of support budgets. Instead, strategic planning funds must be used, and those are much harder to come by, especially when the scope of the projects is in developer years and hundreds of thousands of U.S. dollars. It's no longer possible for someone to ask "how much would it cost to fix xyz?" and receive a response indicating that the work could be done in a few hours or a day or two.<br /><br />Instead, much of the longer-term strategic work that was done to support the Windows Vista platform went unfunded. Secure Endpoints contributed hundreds of hours of developer time to ensure that there would be an OpenAFS client for the new operating system. This was done on the assumption that the costs would be recouped in the future through interest in support contracts. What a surprise it was to hear this week that existing support contract customers are questioning the need for the support.
The long hours spent improving the product have taken OpenAFS off the radar of senior management and, as a result, the funding is disappearing.<br /><br />One large user described how there have been so few reported issues with the 1.4.2 client that he can't justify upgrading to 1.5.15 even though he is aware of all of the significant improvements in performance and stability. Performance improvements just aren't a reason to upgrade when there are thousands of clients involved. Stability doesn't matter if the end users are not being adversely affected. Sure, there are bugs and annoyances, but the help desk knows how to address them and the users move on with life. Management simply is not going to spend money on something that is faster or prettier. If there isn't a critical show-stopper issue, it won't be detected by their radar.<br /><br />Our philosophy is that software is built to address the needs of its users with the goal of making their lives happier and more productive. Good software doesn't attract unwanted attention. In the case of a file system or other infrastructure, the end user should be able to take it for granted. If it receives attention from the user, that is a bad thing.<br /><br />A good support contract vendor is one that addresses issues promptly when they occur, but, more importantly, works to ensure that you do not have issues in the first place. The question is, if support dollars are used to fund development that proactively addresses issues before they are noticed by the customer, how does the customer know that the support dollars were well spent? This is especially true when management does not believe that incremental improvements in performance and stability are worth paying for.<br /><br />I am now beginning to understand the behaviors of large corporations providing support to Federal agencies.
I find them extremely frustrating to deal with because the apparent goal is to deploy software with just the right number of bugs: never an issue that brings the entire system to a halt, but a constant stream of small issues that keeps them on the phone with the agency's help desk. Every week a report is sent to the customer detailing the number of issues categorized by severity and whether or not the user's problem could be addressed. Large numbers of low-severity issues are encouraged, whereas even a single Priority One issue is to be avoided. <br /><br />Fortunately for the clients of Secure Endpoints Inc, I believe that our role is to help prevent problems regardless of the severity. Unfortunately, it is then harder to make the case for additional financial investment in products that are already deemed to be "good enough".Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-83510036307995567212007-01-08T12:19:00.000-05:002008-10-08T20:43:57.744-04:00Happy New Year!It has been many months since this blog was updated and many wonderful things occurred during the final three months of 2006. <br/><br/>On the Kerberos front:<br/><br/>On Nov 9th, MIT announced that it no longer wanted to provide a full-time developer to support Windows development. As a result, Secure Endpoints Inc. has become a development and support partner. Secure Endpoints Inc. will continue to enhance Kerberos for Windows and Network Identity Manager as well as issue new releases in conjunction with MIT's Kerberos team. The primary change is that MIT will no longer be funding Secure Endpoints' efforts. As a result, Secure Endpoints is reaching out to the broader Kerberos for Windows user community to help support on-going development. 
<br/><a target="_top" href="http://www.secure-endpoints.com/kfw/New%20Direction%20for%20Kerberos%20for%20Windows.eml">http://www.secure-endpoints.com/kfw/New%20Direction%20for%20Kerberos%20for%20Windows.eml</a><br/><br/>On Nov 30th, MIT Kerberos for Windows 3.1, including Network Identity Manager 1.1.8, was finally released. <a target="_top" href="http://www.secure-endpoints.com/kfw/Kerberos%20for%20Windows%20version%203.1%20is%20released.eml">http://www.secure-endpoints.com/kfw/Kerberos%20for%20Windows%20version%203.1%20is%20released.eml</a><br/>Although Network Identity Manager has not changed much on the outside since the KFW 3.0 release, on the inside the changes were dramatic. A large number of usability issues were addressed and the plug-in interface was improved to support a wider range of functionality. KFW 3.1 can be downloaded from MIT: <a target="_top" href="http://web.mit.edu/kerberos/dist/index.html#kfw-3.1">http://web.mit.edu/kerberos/dist/index.html#kfw-3.1</a><br/><br/>Development on KFW 3.2 and NIM 1.2 is underway. Secure Endpoints has posted a development road map including 64-bit Windows support, Vista support, and a wide range of
enhancements to the Network Identity Manager user interface. Financial support from the community is required to sustain the ongoing improvements that KFW has received over the last several years.<br/><a target="_top" href="http://www.secure-endpoints.com/netidmgr/roadmap.html">http://www.secure-endpoints.com/netidmgr/roadmap.html</a> <br/><br/>For OpenAFS for Windows, 2006 was a banner year. It started off with the 1.4.1 release candidates and ended with the release of 1.5.13. Throughout those releases there were more than 150 improvements to the product. The most important changes include:<br/>* No more resource leaks within the SMB Server<br/>* Locally managed byte-range locks backed by full file locks on the file server<br/>* Improved performance when disconnected from the network<br/>* Improved performance for directory listing<br/>* Improved performance when storing temporary files within AFS<br/>* Improved power management event handling<br/>* Support for file sizes greater than 2GB<br/>* Over-quota and disk-full errors are now reported<br/>* Significantly improved handling of dirty buffers, resulting in decreased CPU utilization and faster writes<br/>* A Network Identity Manager AFS credential plug-in is provided<br/>* Support for 64-bit Windows<br/>* Support for Microsoft Vista<br/>A summary of the current state of OpenAFS for Windows can be found at <a target="_top" href="http://www.secure-endpoints.com/openafs-windows.html">http://www.secure-endpoints.com/openafs-windows.html</a> as well as in the most recent Status Report: <a target="_top" href="http://www.secure-endpoints.com/talks/OpenAFS-Windows-Dec-2006-Status-Report.pdf">http://www.secure-endpoints.com/talks/OpenAFS-Windows-Dec-2006-Status-Report.pdf</a>. 
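The "file sizes greater than 2GB" item above is worth a brief illustration. The sketch below is illustrative only and is not taken from the OpenAFS sources (the file name is hypothetical): it seeks past the 2 GB boundary that a signed 32-bit file offset cannot address and writes a single byte, producing a (sparse) file larger than 2 GB. A client limited to 32-bit offsets fails at exactly this step, which is why large-file support matters for storing things like DVD images.

```python
import os
import tempfile

# Illustrative sketch: writing beyond the 2 GB (2**31 byte) boundary
# requires 64-bit file offsets. A signed 32-bit offset cannot address
# this position at all. (File name is hypothetical.)
path = os.path.join(tempfile.gettempdir(), "largefile_demo.bin")
with open(path, "wb") as f:
    f.seek(3 * 2**30)      # 3 GiB, well beyond the 32-bit limit
    f.write(b"\x00")       # most filesystems create a sparse file here
size = os.path.getsize(path)
os.remove(path)
print(size)                # 3221225473 (3 GiB + 1 byte)
```

On a filesystem or client without 64-bit offset support, the seek or the subsequent write would fail or silently wrap, which is the class of error the 1.5 series eliminates.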
<br/><br/>Secure Endpoints has published a development road map for OpenAFS for Windows which includes a number of performance improvements to the AFS Client Service, complete re-writes of the Explorer Shell integration and the OpenAFS Control Panel, and the development of a Microsoft Management Console for configuring the AFS Client Service. <a target="_top" href="http://www.secure-endpoints.com/openafs-windows-roadmap.html">http://www.secure-endpoints.com/openafs-windows-roadmap.html</a><br/><br/>Finally, perhaps the best surprise was saved for last. Just before the end of the year the AFS Servers (file, protection, volume, volume database, bos) were made functional once again. The install wizard has been removed because it made assumptions that no longer hold true, but by manually installing the servers as is done on UNIX, it is now possible to run a cell from a Windows Server. See the road map for a summary of what still remains to be done.<br/><a target="_top" href="http://www.secure-endpoints.com/openafs-windows-roadmap.html#afs%20servers">http://www.secure-endpoints.com/openafs-windows-roadmap.html#afs%20servers</a><br/><br/>In 2007, there is much to look forward to. During the first quarter Secure Endpoints will release a new Network Identity Manager plug-in for obtaining KX509/KCA certificates, and with community support there will be significant releases of both KFW and OpenAFS. <br/><br/>Mark your calendar: the next AFS & Kerberos Best Practice Workshop will be held at Stanford during the week of May 7 to 11. As always, full-day tutorials will be provided on AFS and Kerberos installation, administration, and maintenance. This year Secure Endpoints will be providing the Kerberos tutorial. New this year will be a discussion of Kerberos and GSS-API programming practices.<br/><br/>Here's a toast to the accomplishments of 2006 and those that are to come in 2007. 
<br/>Happy New Year!!!!<br/><br/>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-73471518761056340032006-10-18T09:09:00.001-04:002009-02-23T15:23:18.564-05:00The need to avoid release labeling and choice for end usersDevelopers have a tendency to focus on source code management. We maintain source code repositories to help us manage the development process. Within the repository we construct release branches. Each branch allows a set of sources to be shaped for a specific purpose. Typical branching strategies include separate branches for the maintenance of a public release, for development of the next release, and experimental branches for risky development that might not work out or may have an adverse impact on other developers. Developers often give somewhat arbitrary names to these branches: "stable", "unstable", "maintenance", "development", "project foo", and so on, names that only have meaning to the developers.<br /><br />As is often the case, the names assigned to the branches have no relationship to the quality of the code on a particular branch. This is especially true for a software project which supports a large number of operating system platforms. Given the rate of development, one branch may often be a better choice than another for a given platform.<br /><br />OpenAFS has traditionally labeled its branches as "stable" and "unstable". The even-numbered branches are "stable" and the odd-numbered branches are "unstable". This has resulted in significant confusion and frustration for end users. At any given time end users have been presented with up to three current releases:<br /><ul><li>the last <i>final</i> release off of the "stable" branch</li><li>the most recent <i>test</i> release off of the "stable" branch</li><li>the most recent release off of the "unstable" or "development" branch</li></ul>What's an end user to do? 
More importantly, what's an administrator responsible for choosing the release to distribute throughout their organization to do?<br /><br />When presented with the choice of selecting among "stable", "beta", or "unstable", which do you think the majority of individuals will choose? End users don't want to install software that is going to cause them to lose data, and they don't want to be guinea pigs, so more often than not they are going to choose the "stable" release. Even if this release has a list of known bugs a mile long and is years old. <br /><br />The distinction between the various source code branches is of meaning only to the developers. End users do not think of software as source code. They think of it as a product, and the labels associated with different versions of a product will significantly influence the end user's decisions, especially when faced with complex choices they are not qualified to distinguish between. It is unrealistic to assume that an end user is going to understand the importance of file locking or the meaning of a 64-bit file size or the terminology surrounding deadlocks and reference count leaks. When a typical end user is presented with a choice among two or three complex options without a strong recommendation specifying which should be used, simplistic labels such as "unstable", "stable", "final", "development", "test", "beta", "candidate", etc. are much more influential than they are intended to be. <br /><br />The reputation of OpenAFS on the Microsoft Windows and MacOS X platforms is suffering in part because of the choices given to end users and the terminology used to describe them. End users want something that works. They want to visit a web site and see that version X.Y.Z is the best version available for their platform and that this is what they should be using. When they experience a problem and see that they are not currently running the recommended version, then they will upgrade. 
If they experience a problem and are presented with choices that they can't make heads or tails of, they are going to take the path that appears to have the least risk. End users will choose the "stable" or "final" release over something labeled "test", "beta", "unstable", or "development" 9 out of 10 times. Even though the problem they are experiencing might very well be fixed in one of these apparently riskier releases.<br /><br />For Windows users the availability of multiple releases has been a serious problem. The 1.4 series does not contain significant functionality that is meant to protect end users from data loss. This functionality is only available in the 1.5 series. Unfortunately, because end users are presented with <b>new</b> releases from both the 1.4 and 1.5 branches as they are released, it is truly impossible for end users to know which to use without a very clear recommendation from the gatekeepers and perhaps the broader user community. <br /><br />One of the other significant problems facing OpenAFS versioning is the length of time it takes to get through a test cycle. It is often the case that a small number of problems on specific operating system versions or hardware architectures can prevent a test cycle from being completed. In the meantime, the release that should be considered the best choice on all of the other operating system versions and hardware architectures is stuck with a label of "test", "beta", or "candidate", which discourages organizations and end users from installing it.<br /><br />As a result I am recommending that OpenAFS (and all other cross-platform open source projects) avoid the "one version is best for all platforms" mentality. Instead of labeling releases as "stable-1-4-2", "stable-1-4-2-beta-1", "stable-1-4-2-rc3", or "unstable-1-5-9", just use numbers such as "1-4-41", "1-4-42", "1-4-43", "1-5-9". This removes the negative connotations associated with the labels. 
For each platform a recommended release number can be provided. <br /><br />This new approach provides a number of side benefits. No longer do the developers need to guess at what version numbers to assign to test builds. When preparing for a new release we want the final version number to be X.Y.Z.00. Therefore, the developers typically try to assign numbers starting with X.Y.(Z-1).90 in order to ensure that version numbers always increase but to avoid the confusion that might arise if end users thought the test release was in fact the final release. <br /><br />Another benefit is that it will be much easier for administrators to convince management to deploy fixes. Management is always reluctant to deploy a "beta" or "candidate" release because such a release must have bugs. The reality is that all software has bugs. Even if there are no known bugs in a given release at the time the release is announced, it is guaranteed that over time bugs will be discovered, and they will be fixed in later releases. A "final" release is simply one that is believed to build and run on all supported platforms without known faults.<br /><br />The requirement that a "final" release build and run on all supported platforms, including all new Linux kernels, often results in significant delays before important bug fixes can make it out to the user community. For example, at the AFS & Kerberos Best Practice Workshop a demonstration was given of a bug fix for a problem in the 1.4.1 file server that adversely affects client mobility. The bug fix was committed on June 3rd, and yet it took until October 17th for a 1.4.2 final release to be issued. 
In the meantime, more than four months of end user frustration accumulated, and many sites deployed 1.4.1 on their file servers instead of one of the "beta" or "candidate" releases that contained the fix.<br /><br />In speaking with end users, I have found that as long as the version label does not contain negative terminology, they can push out any build that is recommended. However, once doubt is raised in the minds of management regarding the quality of the release, all bets are off.<br /><br />It is my hope that OpenAFS and other open source projects will abandon the traditional release labeling and replace it with incremental build numbers and platform-specific recommendations.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-11240981060396273792006-09-08T12:42:00.001-04:002009-02-23T15:24:48.564-05:00OpenAFS for Windows September 2006 Status Report is now availableThe OpenAFS for Windows September 2006 Status Report is now available:<br /><br /><a href="http://www.secure-endpoints.com/talks/OpenAFS-Windows-Sep-2006-Status-Report.pdf">http://www.secure-endpoints.com/talks/OpenAFS-Windows-Sep-2006-Status-Report.pdf</a><br /><br />For the complete list of changes since the 1.2 release see: <a href="http://www.openafs.org/dl/openafs/1.5.8/winnt/afs-changes-since-1.2.txt">http://www.openafs.org/dl/openafs/1.5.8/winnt/afs-changes-since-1.2.txt</a> <br /><br />and of course be sure to read the Release Notes:<br /> <a href="http://www.openafs.org/dl/openafs/1.5.8/winnt/relnotes-frames.htm">http://www.openafs.org/dl/openafs/1.5.8/winnt/relnotes-frames.htm</a> <br /><br />As always I encourage all organizations and individuals who wish to support the development of OpenAFS for Windows to contact me. 
Financial contributions as well as in-kind assistance are sincerely appreciated. Tax-deductible donations may be made via the OpenAFS account operated by Usenix (a 501(c)(3) not-for-profit corporation).Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-43117550814216317792006-07-25T17:22:00.001-04:002009-02-23T15:25:08.128-05:00Kerberos for Windows 3.1 Beta updateKFW 3.1 Beta 1 has been tagged and installers have been built. <br /><br />An official announcement is soon to come.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-87067624857880458122006-07-20T08:57:00.000-04:002008-10-08T20:43:57.746-04:00OpenAFS for Windows Large File Support is now availableSeveral weeks ago, OpenAFS.org <a href="http://lists.openafs.org/pipermail/openafs-announce/2006/000155.html">announced the release of OpenAFS for Windows 1.5.3</a>. For Windows this release contains three major changes:<br><br><ul><li>First, there are significant changes in the CIFS server's compatibility with the Windows CIFS client. The result should be a major improvement in the interaction with the Explorer Shell. </li><li>Second, this is the first AFS client on Windows to support <b>Inline Bulk Status RPCs</b>. This is a big deal because instead of performing one RPC per directory entry, the Windows client now performs one RPC for every 50 directory entries. In addition, directory entries that are unreadable due to access permissions are temporarily cached as expired callbacks. This significantly reduces the time required to obtain a directory listing or to create, open, and delete files.</li><li>Third, for the first time the Windows AFS client is capable of supporting <b>64-bit Large Files</b>. 
You can now use AFS to store DVD images.</li></ul><br>With all of these changes I bet you can't wait to get your hands on <a href="http://www.openafs.org/release/openafs-1.5.3.html">this release</a>. Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-47719559695824745852006-07-20T08:23:00.001-04:002009-02-23T15:25:51.564-05:00At long last, Kerberos for Windows 3.1 is nearing releaseAfter more than seven months of waiting, MIT Kerberos for Windows 3.1 is finally going to begin beta testing. This release will fix all of the bugs that plagued KFW 3.0's Network Identity Manager 1.0: the memory leaks, the principal name rejections, the non-en_US locale problems. They are all gone.<br /><br />In addition, KFW 3.1 will not crash on Windows XP64 under WOW64 simply because Microsoft failed to actually export tickets from the LSA even though the functions succeed. (This problem is fixed in Vista Beta 2.)<br /><br />Along with KFW 3.1 will come a new version of the AFS plug-in installer. Now that the NetIDMgr APIs are truly stable, the AFS plug-in can be integrated into an OpenAFS release. That said, the KFW 3.1 release will install on 64-bit Windows systems, but there will not be native 64-bit support until KFW 3.2.<br /><br />The most dramatic changes in KFW 3.1 are in NetIDMgr. The user interface is much more fleshed out. In the main window, tickets can be sorted in arbitrary ways by clicking the column headers. The View->Columns menu allows the user to select which ticket properties should be displayed. The properties include everything from principal names to lifetimes to enc-types to key version numbers. Tool tips are supported for the toolbar. The font used to display tickets is user selectable. Identities can be added without obtaining credentials via the Global Identity Settings. The Kerberos 5 Realm configuration editor is complete. 
Kerberos 5 credentials can now be obtained from FILE: ccaches. In general, the overall look and feel of the tool is significantly improved.<br /><br />I look forward to being able to announce on Tuesday July 25th that the installers are available for <a href="http://web.mit.edu/kerberos/">download</a>.Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0tag:blogger.com,1999:blog-3333505129375354922.post-83952810374953484642006-06-12T23:08:00.001-04:002009-02-23T15:26:38.392-05:00AFS & Kerberos Best Practice Workshop 2006AFS & Kerberos Best Practice Workshop this week <br /><br /><a href="http://workshop.openafs.org/afsbpw06/">http://workshop.openafs.org/afsbpw06/</a>Secure Endpointshttp://www.blogger.com/profile/16282062715438996079noreply@blogger.com0