Channel: File Services and Storage forum

folder redirection directory change


Hello,

I am an ICT technician at a school in Belgium.
I have a problem with folder redirection. Originally, folder redirection was configured for SOME of the users, e.g.:
My Documents to \\dc1\homedir\%username%\my documents

Now the problem: the storage on dc1 could no longer keep up with the volume of documents.

Two weeks ago all of my users were given folder redirection to \\dc2\homedir\%username%\my documents.
For the "new users" who never used dc1 there is no problem, but for the users who did use dc1 there is: Sync Center does not remove the \\dc1 location and reports conflicts.
Sync Center lists both locations: \\dc1 and \\dc2.
Also, when they create a Word document and save it under My Documents, they sometimes get an error such as "file corrupted" or "path not found". Or when they download something, it says they don't have the rights to open it even though they just downloaded it.
Sometimes they also get no icons on the desktop or in the Start menu; the icons are just white.

The only option I know of for now is to delete the user profile on the local PC and log in again.
The user then gets a temporary account, so I log in with my admin account again and delete the .bak key from the registry.
After logging in again with the new profile, Sync Center is fine, and there are no more problems with Word or anything else.
But this is unworkable, because we have a lot of users and a lot of PCs; it is almost impossible to go from PC to PC to remove the profiles.
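The only scripted alternative I can think of is resetting just the Offline Files cache on each client instead of rebuilding the whole profile. A rough, untested sketch (the FormatDatabase value is the standard CSC cache-reset mechanism, not something specific to this setup):

# Untested sketch: force the Offline Files (CSC) cache to be rebuilt on next boot,
# so the stale \\dc1 partnership is dropped. Run elevated on the client, then reboot.
$csc = 'HKLM:\SYSTEM\CurrentControlSet\Services\CSC\Parameters'
if (-not (Test-Path $csc)) { New-Item -Path $csc -Force | Out-Null }
New-ItemProperty -Path $csc -Name FormatDatabase -Value 1 -PropertyType DWord -Force | Out-Null
Restart-Computer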

Thanks in advance!

Greetings,

Jelle



What do you guys use for production VM storage?

I mean explicitly not labs / home setups, but something you roll out in an office you don't own yourself :)

Would you put a free solution into production that permits commercial use (no EULA violation!) but has only community / limited vendor support?

What would be a game changer for you? Say, a FreeNAS (free) -> TrueNAS (paid) upgrade?

Thanks!! :)

Disk punch via PowerShell and PsExec


All,

I have been struggling to get some virtual machines to disk punch and to get the right syntax to work.

The PowerShell script below was posted on "whats up duck". However, getting it to run via a scheduled task posed a number of issues, since getting a scheduled task (deployed via Group Policy) to show up on a machine required a reboot, and I couldn't get it to display or run any other way. The other alternative would be a script that configures a scheduled task and then removes it, which was just a pain.

Content of Write-ZeroesToFreeSpace.ps1:

<#
 .SYNOPSIS
  Writes a large file full of zeroes to a volume in order to allow a storage
  appliance to reclaim unused space.

 .DESCRIPTION
  Creates a file called ThinSAN.tmp on the specified volume that fills the
  volume up to leave only the percent free value (default is 5%) with zeroes.
  This allows a storage appliance that is thin provisioned to mark that drive
  space as unused and reclaim the space on the physical disks.

 .PARAMETER Root
  The folder to create the zeroed out file in.  This can be a drive root (c:\)
  or a mounted folder (m:\mounteddisk).  This must be the root of the mounted
  volume, it cannot be an arbitrary folder within a volume.

 .PARAMETER PercentFree
  A float representing the percentage of total volume space to leave free.  The
  default is .05 (5%)

 .EXAMPLE
  PS> Write-ZeroesToFreeSpace -Root "c:\"

  This will create a file of all zeroes called c:\ThinSAN.tmp that will fill the
  c drive up to 95% of its capacity.

 .EXAMPLE
  PS> Write-ZeroesToFreeSpace -Root "c:\MountPoints\Volume1" -PercentFree .1

  This will create a file of all zeroes called
  c:\MountPoints\Volume1\ThinSAN.tmp that will fill up the volume that is
  mounted to c:\MountPoints\Volume1 to 90% of its capacity.

 .EXAMPLE
  PS> Get-WmiObject Win32_Volume -filter "drivetype=3" | Write-ZeroesToFreeSpace

  This will get a list of all local disks (type=3) and fill each one up to 95%
  of their capacity with zeroes.

 .NOTES
  You must be running as a user that has permissions to write to the root of the
  volume you are running this script against. This requires elevated privileges
  using the default Windows permissions on the C drive.
 #>
 param(
   [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]
   [ValidateNotNullOrEmpty()]
   [Alias("Name")]
   $Root,
   [Parameter(Mandatory=$false)]
   [ValidateRange(0,1)]
   $PercentFree = .05
 )
 process{
   #Convert the $Root value to a valid WMI filter string
   $FixedRoot = ($Root.Trim("\") -replace "\\","\\") + "\\"
   $FileName = "ThinSAN.tmp"
   $FilePath = Join-Path $Root $FileName

   #Check and make sure the file doesn't already exist so we don't clobber someone's data
   if( (Test-Path $FilePath) ) {
     Write-Error -Message "The file $FilePath already exists, please delete the file and try again"
   } else {
     #Get a reference to the volume so we can calculate the desired file size later
     $Volume = gwmi win32_volume -filter "name='$FixedRoot'"
     if($Volume) {
       #I have not tested for the optimum IO size ($ArraySize), 64kb is what sdelete.exe uses
       $ArraySize = 64kb
       #Calculate the amount of space to leave on the disk
       $SpaceToLeave = $Volume.Capacity * $PercentFree
       #Calculate the file size needed to leave the desired amount of space
       $FileSize = $Volume.FreeSpace - $SpaceToLeave
       #Create an array of zeroes to write to disk
       $ZeroArray = new-object byte[]($ArraySize)

       #Open a file stream to our file
       $Stream = [io.File]::OpenWrite($FilePath)
       #Start a try/finally block so we don't leak file handles if any exceptions occur
       try {
         #Keep track of how much data we've written to the file
         $CurFileSize = 0
         while($CurFileSize -lt $FileSize) {
           #Write the entire zero array buffer out to the file stream
           $Stream.Write($ZeroArray,0, $ZeroArray.Length)
           #Increment our file size by the amount of data written to disk
           $CurFileSize += $ZeroArray.Length
         }
       } finally {
         #always close our file stream, even if an exception occurred
         if($Stream) {
           $Stream.Close()
         }
         #always delete the file if we created it, even if an exception occurred
         if( (Test-Path $FilePath) ) {
           del $FilePath
         }
       }
     } else {
       Write-Error "Unable to locate a volume mounted at $Root"
     }
   }
 }

So, PsExec calling CMD and then PowerShell to execute this script against a list of machines was the best way. You need to run your CMD prompt as your domain admin account (Shift + right-click CMD and "Run as different user"), because the issue with PsExec is that it will otherwise try to use your local rights to create the remote service, which fails and gives you "access is denied".

psexec -d -u domain\username  @C:\users\%username%\desktop\diskpunch\computers.txt cmd /c "\\dom.ain\NETLOGON\Diskpunch\diskpunch.bat"" (We're putting it in netlogon since we're using it again later)

Then put in your password

(In your computers.txt, just list the servers, and please note the double quotes at the end.)

The batch file contains the following:

powershell.exe -executionpolicy Bypass -command "Get-WmiObject Win32_Volume -filter drivetype=3 | \\Dom.ain\NETLOGON\diskpunch\Write-ZeroesToFreeSpace.ps1"

This may seem simple to some of you, but it really got on my nerves getting the config right, so I thought I'd share in case anyone else wants to disk punch their virtual infrastructure.
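If PowerShell remoting is already enabled on the targets, the same fan-out can probably be done without PsExec at all. A rough, untested sketch (the local script path is a placeholder, and -FilePath pushes the script from your machine to each target, which also sidesteps the double-hop problem of reading it from NETLOGON inside a remote session):

# Assumes WinRM is enabled and you are running as a domain admin.
# -ArgumentList passes 'C:\' to the script's -Root parameter, so this example only punches the system volume.
$computers = Get-Content "$env:USERPROFILE\Desktop\diskpunch\computers.txt"
Invoke-Command -ComputerName $computers -FilePath 'C:\Scripts\Write-ZeroesToFreeSpace.ps1' -ArgumentList 'C:\'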

Anyway, hope this helps someone.



DFS replication tab error


DFS replication tab error.

Please help.


AliahMurfy

Access Based Enumeration not working Windows 2012 R2 Datacenter


I am having a hard time figuring out why Access-Based Enumeration is not working for me. I have set and re-set the settings and I'm still able to see folders I should not. I do get denied access to folders I don't have access to. I have checked effective access, which says everything is denied to me, but I can still see the folder(s) listed.

I have the share permissions set to Authenticated Users - Full Control.

I have the NTFS permissions set to the correct department groups - Modify, Domain Admins - Full Control, and Guests - Deny Full Control.
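For reference, on 2012 R2 the share-level ABE setting itself can be checked and flipped from PowerShell, which at least rules out the GUI lying about it (a sketch; "DeptShare" is a placeholder share name):

# Show whether Access-Based Enumeration is enabled on the share
Get-SmbShare -Name "DeptShare" | Select-Object Name, FolderEnumerationMode

# Enable it (AccessBased = hide items the user has no read access to)
Set-SmbShare -Name "DeptShare" -FolderEnumerationMode AccessBased -Force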

Any ideas?

--------Update--------

I believe I have found the issue. It was a rights issue with a group that had been added to the local Administrators group.

Changing from channel 0 to channel 1 on the SCSI card to see the RAID Array.


Hi,

We have an IBM eServer xSeries 236 running Windows Server 2003 Standard. We have an EXP400 with 13 drives as part of a RAID array connected to a dual-channel SCSI card. This past weekend SCSI channel 0 went bad: the drives do not show up and they all have amber lights on them.

We moved the SCSI cable to channel 1 and the drive lights went green and all of them looked normal, but when we restarted the server we received the message that the "drives are not responding or they are in a different location".

How do we reconfigure the SCSI card from channel 0 to channel 1 so that the drives can be found?

Thanks,

Taino Negro 9

Admin shares on a Windows 2008 R2 server not accessible from primary domain controller


When trying to connect to one of my member servers from the primary domain controller of the domain it belongs to, I cannot access the administrative shares, e.g. \\server\C$ or \\Server\F$. I am able to access all other shares from the same domain controller, and I am able to access the admin shares from other servers.

The error I get is: "Windows cannot access \\server\c$" (in the details section, if you click the down arrow, it shows error code 0x80070043, "The network name cannot be found").

It seems strange to me that the admin shares are accessible from servers other than the primary domain controller, while all other shares are accessible from that same DC. I have taken the server off the domain and rejoined it, manually disconnected everyone from the share, and cannot for the life of me figure this out. I need to get this working to be able to use DFS over my WAN. Please let me know if anyone has dealt with this issue and has found a way to resolve it.
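A couple of checks that may be worth running on the member server itself, just to confirm the admin shares actually exist and haven't been disabled (a sketch; the AutoShare values only appear if someone has explicitly set them):

# List the shares the Server service is actually exposing (C$, F$, ADMIN$ should be in the list)
Get-WmiObject Win32_Share | Select-Object Name, Path

# See whether automatic administrative shares have been disabled via the registry
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' |
    Select-Object AutoShareServer, AutoShareWks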

Does Storage Spaces take advantage of NAS drives?


Most of my servers have LSI RAID cards in them; I usually run RAID 10 and use Western Digital "Red" drives. My understanding is that drives categorized as "NAS" drives, like the WD Red series, have special error-handling logic which makes them more suitable for use in a RAID than ordinary desktop drives. I'm told that "NAS" drives handle many of the non-fatal disk errors that regularly occur on a disk drive without taking the disk offline, whereas ordinary drives may report a recoverable misread as a fatal error and cause the RAID controller to take the drive offline.

My question is: does Storage Spaces take advantage of this special error handling if you set up 4 or more drives in its equivalent of RAID 10? More specifically, I have a server with just an LSI 9200-8e host bus adapter (HBA), which has no hardware RAID capability but instead presents its drives as simple JBODs. If I configure 4, 6, or 8 of its drives as a Storage Spaces mirror set, will it take advantage of the RAID error-handling capabilities of a "NAS" drive? In other words, is it worth it to put in, say, 8 WD Red drives rather than 8 ordinary WD desktop drives? The 4 TB Red drives cost $228 apiece while the 4 TB desktop drives cost $169 each. I know the Reds are money well spent on an LSI RAID controller (say, a 9260-8i), but would it just be a waste of money to put them on an HBA and configure them as a mirrored drive under Storage Spaces?


Enterprise policy to deny creating "long file names" / "long path names" on file systems... how to?


hello

If the deepest file in a structure is right at the 256-character path limit, then when someone renames any of the parent folders above that file to a longer name, access to the file(s) is broken.

Is it possible to deny the creation of names/paths with more than 256 characters on clients/servers? And even to deny a rename if it would push child objects over 256 characters?

FSRM doesn't have this option. Using robocopy is not an option.

I want to be sure that no one can go over the 256-character limitation!
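For what it's worth, detection (as opposed to prevention) can be scripted; a rough sketch (D:\Shares and the CSV path are placeholders, and note that enumeration itself can fail on paths that already exceed the limit, so this is best-effort only):

# Report any existing path longer than 255 characters for follow-up
Get-ChildItem -Path 'D:\Shares' -Recurse -ErrorAction SilentlyContinue |
    Where-Object { $_.FullName.Length -gt 255 } |
    Select-Object FullName, @{Name='Length';Expression={$_.FullName.Length}} |
    Export-Csv -Path 'C:\Temp\LongPaths.csv' -NoTypeInformation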

thanks

Clustered Storage Spaces & Fibre Channel


Guys,

Did anybody implement a working configuration with Clustered Storage Spaces built from FC disks? With a SIMPLE spanning volume on top (no need for fault tolerance)? Technically it would be RAID 0 (Storage Spaces) over RAID 5 (hardware).

Thanks! :)


Cheers,

Anton Kolomyeytsev [MVP]

StarWind Software Chief Architect


Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

User directories not searchable


I have set up roaming user profiles with folder redirection several times without problems, until now.

Server 2012 R2 with folder redirection and roaming user profiles enabled. Settings followed (to the best of my knowledge; it was 4 months ago now):

https://technet.microsoft.com/en-us/library/cc737633%28v=ws.10%29.aspx

The issue is that none of the user folders can be searched, i.e. you search for a file you know is there and get 'no items matched your search'.

I have tried several users and several client OSes. Search works in other folders. I have tried changing permissions on the user folders: the root folder has full SYSTEM access, and I have granted one folder ownership to admins, then ownership to the user, then given SYSTEM full control, with no joy.

Deduplication file cloning


Take a golden, generalized VHDX, with the intention of creating a few billion copies of it. Can this be done in a few milliseconds with deduplication, using no additional space until the VMs are launched? I was googling for this feature and it is described as a btrfs feature in an Ars Technica article; this is the command in btrfs:

me@server:~$ cp --reflink=always 200GB_virtual_machine_drive.qcow2 clone_of_200GB_virtual_machine_drive.qcow2

Copy Utility for Data Deduplication and Long Path Names


I thought I'd share a bit of information I learned from several weeks of research.

If you have data deduplication enabled on a volume, most copy utilities will fail. Tested utilities include:

  • TeraCopy
  • Roadkil's Unstoppable Copier
  • FastCopy

Copy utilities are required in our environment because we have long path names (and lots of them). Windows Explorer's copy works just fine on deduplicated data, but cannot handle our path names.

The one copy utility we have tested that both properly re-hydrates deduplicated data and handles long path names is:

  • RoboCopy
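For reference, a typical invocation might look like the line below (the paths are placeholders). RoboCopy reads through the dedup filter, so the destination receives fully re-hydrated files, and it handles long paths natively; /E copies subdirectories including empty ones, /COPY:DAT copies data, attributes and timestamps, /R and /W keep retries short, and /LOG keeps the console output manageable.

robocopy "D:\DedupVolume\Share" "E:\Target\Share" /E /COPY:DAT /R:1 /W:1 /LOG:C:\Temp\robocopy.log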

AD RMS

Can I install AD RMS on the same domain controller? Just as a test.

Windows Search not working with Data Deduplication?


Hi,

I noticed that many files were missing when searching for them on my Server 2012 file server.

After some troubleshooting I noticed that the missing ones had a Size on Disk of 4 KB and the "SparseFile" and "ReparsePoint" (PL) flags set.

So it looks like they were processed by the enabled Data Deduplication.
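The flag check can be scripted roughly like this (D:\Data is a placeholder path):

# List files whose ReparsePoint attribute is set, i.e. the ones the dedup filter
# has optimized down to a small stub on disk
Get-ChildItem -Path 'D:\Data' -Recurse -File -ErrorAction SilentlyContinue |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint } |
    Select-Object FullName, Length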

Am I missing something here, or is it really the case that deduplicated files cannot be indexed by Windows Search?


2008 Disk Manager extend volume parameter is incorrect


I extended my E: drive partition from 500 GB to 750 GB using Disk Management's Extend Volume. This is the error I got after running through the GUI: "the parameter is incorrect". Any advice?


Does Windows Search Service drag everything to a crawl?


Just looking for some experienced information so I won't have to set up my own instance and spend a weekend experimenting. Most of my searching turned up guides on disabling the indexer in XP through 7.

Setup:

I work for a large corporation with several sites scattered around the globe. I'm in manufacturing, but I have a deep interest and a little (tiny) skill in IT. Daily, I need to search among around 26,000 files, mostly PDFs and spreadsheets, in a couple of network folders and their myriad sub-folders. I had been using Windows XP for almost a year, and pointing my indexer at these folders worked really well. The search toolbar gave me practically instant access to anything I needed.

I was recently upgraded to a faster machine running Windows 7, and now search is a bit less straightforward. I managed to find a utility to add the network folders to a Library (super handy), but since they're not indexed, it takes some time and bandwidth to find anything, not to mention I have to navigate to the libraries instead of simply hitting the Windows key. After a lot of research, I figured out that I probably needed to convince our IT department to enable WSS (Windows Search Service) in lieu of WIS on our servers. It turns out the folders I'm searching are on some kind of .NET... application... thing... and can't be indexed. The folders I search are part of a collection of almost 400 GB of files, so I suppose moving them onto a proper server just for me (and my 6 coworkers) would be a lot of effort for little payoff. In addition, the incredible overhead of indexing the file system, both in processing and in storing the index, was touted as a reason this wouldn't be happening. That last point seems a tad obtuse.

I know that in XP (and later) users will often disable the indexer just because it takes forever to finish indexing, and in the meantime the computer is practically worthless as the OS constantly flogs the drive with its near-Sisyphean task. Indeed, I had to leave my computer running all weekend to build the index over a sluggish network, but after that there was NO performance impact to speak of. Granted, all of these folders are pretty much frozen in time...

So my questions are:

  • Just how badly does WSS impact server performance? Is it the sort of thing that can be started over a holiday weekend, after which the impact of maintaining the index is negligible?
  • It might be worse on systems where the files change constantly, but surely a selective inclusion of a few legacy folders that are unlikely to change wouldn't be too intense?
  • I imagine most of the company would benefit from indexing. How reasonable would it be to index folders on request instead of doing the whole thing at once?
  • Is there a simple way to monitor what resources are being used when users walk the file system searching, or would it take some kind of user poll?
  • Just how much space DOES an index take up?

Thanks for sharing your experience.

New design resources for software-defined storage and Storage Spaces in Windows Server

DFS replication group question


So, we have file servers that sync their data to one clustered file server, which is a replication partner to ALL file servers; we have over 100 file servers. They basically use the one clustered file server to store copies of all file server shares as a backup using DFS, and the copies need to be kept in sync. Once I move a file server to another forest, replication breaks; DFS Replication is not supported across forests. Would you recommend building a new server in the target forest and switching replication partners after migration? We have a big concern that it will take a very long time to replicate and the data will be out of sync. The data on the DFS shares is critical, so we can't afford to be out of sync for long. We can't migrate all file servers at once.

Please recommend other options for migrating file servers that are members of the same replication group.

Thank you very much!

DFS namespace high availability


Hello,

I have two Server 2012 R2 machines:

- Server1 is a domain controller with DFS Namespaces & Replication.

- Server2 is a regular member server in the domain (not a secondary domain controller) with DFS Namespaces/Replication.

So my domain name is example.priv, and my share is \\example.priv\SharedFolder.

Some shared folders are synced between both servers.

I added Server2 to the namespace servers.

My goal is high availability: if Server1 goes down, I still want to be able to access the share.


If I ping example.priv, it returns Server1's IP.

When I stop Server1, I can't access my share anymore, because example.priv resolves only to Server1.

I need example.priv to resolve to both Server1 and Server2 (round robin?).

How could I set up my environment to keep access to my share when Server1 is down?
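In case it helps, the current state can be double-checked from PowerShell (a sketch, assuming the DFSN and DnsClient modules that ship with 2012 R2):

# Which servers are registered as root targets for the namespace?
Get-DfsnRootTarget -Path '\\example.priv\SharedFolder'

# Which IPs does the domain name itself resolve to?
Resolve-DnsName -Name 'example.priv' -Type A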

Thank you,
