Monday, August 31, 2009

iostat -En

An example of the above command:
# iostat -En
c8d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: ST3160215AS Revision: Serial No: 6RA Size: 160.04GB <160039305216 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
c7t0d0 Soft Errors: 0 Hard Errors: 9 Transport Errors: 0
Vendor: PLEXTOR Product: DVDR PX-712A Revision: 1.09 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 9 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c10t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: WDC WD40 Product: 0LB-07DNA2 Revision: 7B79 Serial No:
Size: 40.02GB <40020664320 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 76 Predictive Failure Analysis: 0
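iostat -En also accepts device names as operands, so you can limit the report to a single disk; for example (using one of the devices from the output above):

# iostat -En c8d0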

Sunday, August 30, 2009

Clone boot disk to another disk with bigger partition

Created a 30 GB partition on the second disk (c10t0d0):
# format
(choose disk 1, i.e. c10t0d0)
format> fdisk
========================================
Note:
If you receive the EFI label complaint (after the OS has been installed on the first disk and before you set up the mirror), make sure you put an SMI label on the second disk and partition it exactly the same. Use format -e disk2 and then issue 'label' - it will ask whether you want an SMI or EFI label; choose SMI. Then do:
prtvtoc /dev/rdsk/disk1s2 | fmthard -s - /dev/rdsk/disk2s2

Then, assuming your OS is installed on s0, set up a mirror like:
#zpool attach rpool disk1s0 disk2s0
==========================================

# prtvtoc /dev/rdsk/c8d0s2| fmthard -s - /dev/rdsk/c10t0d0s2
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk. The full disk capacity is 62492850 sectors.
fmthard: New volume table of contents now in place.

#zpool attach rpool c8d0s0 c10t0d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c10t0d0s0 is part of exported or potentially active ZFS pool rpool. Please see zpool(1M).

#zpool attach -f rpool c8d0s0 c10t0d0s0
ZFS started the resilvering process (check with zpool status).
Please be sure to invoke installgrub(1M) to make 'c10t0d0s0' bootable.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c10t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 271 sectors starting at 50 (abs 16115)
#

I switched the boot disk and the system started up fine. I didn't want my mirror configuration any more, so I did:
#zpool detach rpool c8d0s0
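To confirm the pool is back to a single-disk configuration, a quick check (output will vary):

#zpool status rpool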

Saturday, August 29, 2009

Using Axigen Command Line Interface

After using axigen-cfg-wizard to set up a domain, I noticed the following after installing the license key and restarting Axigen:

I could not access my mail using webmail or use the GUI.
I found the answer in the Axigen knowledge base:
with the license belonging to the free version you may only have one domain. I remembered that I had left the original domain in the config as well, so I had two domains, which is not allowed. Therefore no domain was present.

And here comes the CLI to the rescue:

To start the CLI: telnet 127.0.0.1 7000
root@opensolaris:/var/opt/axigen/webadmin# telnet 127.0.0.1 7000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Welcome to AXIGEN's Command Line Interface
You must login first. For a list of available commands, type HELP
user admin
xxxxx
For a list of available commands, type HELP
+OK: Authentication successful
<#> help
-------------------------------------
The commands available for the Initial context are:
EXIT/QUIT - exits CLI and closes connection to AXIGEN
HELP - prints this help message

LIST Domains [wildcard (ex: domain*)] - lists the domains of this server
LIST domainActivationRequests - lists the requests made for domain registration

CLEAR domainActivationRequests - clear the list of requests not pending

SAVE CONFIG [] - saves the server's running configuration (a suffix will be added)

CONFIG SERVER - enters the Server context

SET admin-password - sets the admin password (max. 32 chars) (OBSOLETE - kept for backward compatibility)

CHANGE password oldPassword newPassword - changes your password (max. 32 chars)

ENTER QUEUE - enters the Queue context
ENTER AACL - enters the Administrative ACL context

CREATE Domain name domainLocation postmasterPassword - creates a domain (changes context)
CREATE Subdomain prefix parentDomain postmasterPassword - creates a subdomain (changes context)
REGISTER Domain domainLocation - registers a domain to the server (changes context)
UNREGISTER Domain name - unregisters a domain from the server

UPDATE Domain name - updates a domain from the server (changes context)
SHOW Domain name [ATTR ] - shows the given domain
-------------------------------------
<#> list domains
The list of domains for the server:
+OK: command successful
<#> create domain name solsan domainlocation /var/opt/axigen/domains/solsan postmasterpassword xxxxx
+OK: command successful
commit
committing changes and switching back to previous context.
This operation might take some time. Please wait....
+OK: command successful

<#> list domains
The list of domains for the server:

name Total Used
------------------------------------------------------------
solsan 20Kb 20Kb

+OK: command successful
<#> help
-------------------------------------
<#> exit
WARNING: all changes made and not committed are lost
connection to AXIGEN closing.
+OK: have a nice day
Connection to 127.0.0.1 closed by foreign host.
root@opensolaris:/var/opt/axigen/webadmin# /etc/init.d/axigen restart
Stopping AXIGEN Mail Server... DONE
Starting AXIGEN Mail Server... DONE
root@opensolaris:/var/opt/axigen/webadmin# /etc/init.d/axigen reload
Reloading AXIGEN configuration... DONE
root@opensolaris:/var/opt/axigen/webadmin# /etc/init.d/axigen restart
Stopping AXIGEN Mail Server... DONE
Starting AXIGEN Mail Server... DONE
root@opensolaris:/var/opt/axigen/webadmin#

Mail from command prompt using Axigen

http://www.axigen.com/kb/show/35

# ln -sf /opt/axigen/bin/sendmail /usr/lib/sendmail
# cd /usr/lib
# chown axigen:axigen sendmail
# chmod 6750 sendmail

Note: the chown command resets the SUID and SGID bits, so you need to issue it before the chmod command.
Finally, in order for any user to be able to send e-mails using the command-line sendmail wrapper, you need to add that user to the axigen group (see the sketch below).
In my case I added the Unix user "admin" to the axigen group and tested to see if I received any mail in Axigen.
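On OpenSolaris, adding a user to the group could look like this (a sketch; note that usermod -G replaces the existing supplementary group list, so include any groups the user is already a member of):

# usermod -G axigen admin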

To test:
echo "Coming home for dinner" | mail admin
mailx -s "Axigen conf file" admin < axigen.cfg

Some additional locations:
config file: /var/opt/axigen/run/axigen.cfg
webmail: http://10.0.0.4:8000 (10.0.0.4 for Axigen 7.2)
webadmin: http://10.0.0.4:9000

CLI: telnet 10.0.0.4 7000

Axigen Mail on OpenSolaris

In order to receive alerts from my NAS I decided to use the free Axigen Office Edition (Standard).
By setting up email on my NAS I only need a POP email notifier on my DAW to see if there are alerts; I use POP Peeper for this.

Axigen provides the whole package, including POP3, in a modern interface with an easy installation. Officially Axigen is not supported on OpenSolaris, but it turned out fine.

Downloaded the latest version for Solaris.

First I removed sendmail, because I will use the sendmail functionality from Axigen:
OpenSolaris: pkg uninstall SUNWsndm

Execute as root: ./axigen-7.1.4.i386.solaris.run

Then the wizard needs to be launched by issuing the following command (on Solaris and on all Linux platforms):

/opt/axigen/bin/axigen-cfg-wizard

This wizard uses xterm, and that was causing some problems.

I tried with VNC: I started vncserver as root and used vncviewer on Windows (10.0.0.4:5901).
Although xclock was working as root, I could not start axigen-cfg-wizard.

On Google I found the answer: install the ncurses package.
After that was done I could start the wizard.

=========

If you don't want to use VNC, Axigen provides the following solution for this problem:
Error opening terminal when running axigen-cfg-wizard on Solaris
Quick Link: http://www.axigen.com/kb/show/56
Article updated on 26 January, 2007

Description
Running axigen-cfg-wizard on Solaris returns "Error opening terminal"

Resolution
Because of the non-default path of the terminfo terminal definitions in Solaris, the axigen-cfg-wizard application returns "Error opening terminal". This can easily be solved by setting the TERMINFO environment variable to the correct terminfo path:

TERMINFO="/usr/share/lib/terminfo"
export TERMINFO

STEPS:
Log on to your Solaris box.
Open a terminal window as admin (the standard user created during install of OpenSolaris) and open an xterm session as root with:
pfexec xterm

In this window, at the prompt, type:
TERMINFO="/usr/share/lib/terminfo"
export TERMINFO

Now you can start the wizard:
/opt/axigen/bin/axigen-cfg-wizard

I configured everything according to the defaults.
(You can easily change it later using the GUI.)

Installing Logwatch on OpenSolaris

source: Caffeinated
Typically, installing Logwatch is fairly trivial. On Linux, you’d just use the package installer command and you’re done. On OpenSolaris, there doesn’t seem to be a packaged version of Logwatch (yet), so installing from the source tarball is necessary. Fortunately, there’s a shell script that performs the installation. The bad news is this script finds /usr/sbin/install which is the Solaris version of install. This version behaves very differently from those found in other Unix variants. The Logwatch installer is expecting the behavior of the install script found on Linux, so it fails miserably on OpenSolaris.

The good news is, there’s a simple solution. Just install the SUNWscp package. This is the “source compatibility package”, which installs numerous commands that help OpenSolaris behave more like other Unix systems. The Logwatch installer script prepends the /usr/ucb directory to the PATH when it runs, so it finds the install script that it is expecting, and thus it installs Logwatch perfectly. The only thing left is to add the crontab entry, as shown at the end of the install output.

One last note about Logwatch, and it concerns that crontab entry. It seems that the default configuration for Logwatch is to print the report rather than sending an email to the default recipient, root. However, the example crontab entry is redirecting all output to /dev/null. So how exactly is one supposed to get a daily report? The answer is to edit the /etc/logwatch/conf/logwatch.conf file, adding Print = no at the end of the file. That tells Logwatch to email the report rather than printing. It’s a mystery to me why that’s the default given the example crontab entry they display during the installation process. But at least it’s easy to fix, and nicely demonstrates how easy it is to customize Logwatch without touching the default configuration files.
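Putting those two pieces together, my setup ended up looking something like this (a sketch; the installer prints the exact crontab line to use, so treat the logwatch.pl path below as an assumption):

# crontab entry: run Logwatch daily at 02:00; output is redirected to
# /dev/null because the report is mailed, not printed
0 2 * * * /usr/share/logwatch/scripts/logwatch.pl > /dev/null 2>&1

# tell Logwatch to mail the report to root instead of printing it
echo "Print = no" >> /etc/logwatch/conf/logwatch.conf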

Sunday, August 16, 2009

Get serial number of disk using smartmontools

(source: http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/)
Installed smartmontools 5.38:
tar zxvf smartmontools-5.38.tar.gz
cd smartmontools-5.38
./configure
make
make install
(Smartmontools is by default installed in /usr/local)

The first change to make in /usr/local/etc/smartd.conf is to comment out the DEVICESCAN line. DEVICESCAN is fine if you want to scan all disks in your system, but I found that smartmontools didn't like my rpool disk, and it wanted me to declare the disk types as "scsi" before it would do anything at all. Next we have to tell smartd which disks to monitor, so I added the following lines to the end of the smartd.conf file:

/dev/rdsk/c8t0d0 -d scsi -H -m redalert
/dev/rdsk/c8t1d0 -d scsi -H -m redalert
/dev/rdsk/c8t2d0 -d scsi -H -m redalert
/dev/rdsk/c8t3d0 -d scsi -H -m redalert
/dev/rdsk/c8t4d0s0 -d scsi -H -m redalert


Go to /usr/local/sbin:

root@dawbckup:/usr/local/sbin# ./smartctl -d scsi -a /dev/rdsk/c7t0d0
smartctl version 5.38 [i386-pc-solaris2.11] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Serial number: S13PJDWS647014
Device type: disk
Local Time is: Sun Aug 16 15:11:29 2009 WEST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 31 C

Error Counter logging not supported
No self-tests have been logged
root@dawbckup:/usr/local/sbin#

c8t0d0: S13PJDWS647014
c8t1d0: S13PJDWS304477
c8t2d0: S13PJDWS647020
c8t3d0: S13PJDWS304478
(the disks of my SAN)

So far so good, but what about having smartd run at bootup, and continuously monitoring the disk status? On Linux, you'd use init scripts, but since this is OpenSolaris, we'll use the Service Management Framework (SMF) instead. To do that, paste the following text into /var/svc/manifest/site/smartd.xml, change the file ownership to root:sys, and invoke:
pfexec svccfg -v import /var/svc/manifest/site/smartd.xml
Then check that the service is running (svcs smartd), and if not, enable it using pfexec svcadm enable smartd.

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="smartd">
  <service
     name="site/smartd"
     type="service"
     version="1">
    <single_instance/>
    <dependency
       name="filesystem-local"
       grouping="require_all"
       restart_on="none"
       type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>
    <exec_method
       type="method"
       name="start"
       exec="/usr/local/etc/rc.d/init.d/smartd start"
       timeout_seconds="60">
      <method_context>
        <method_credential user="root" group="root"/>
      </method_context>
    </exec_method>
    <exec_method
       type="method"
       name="stop"
       exec="/usr/local/etc/rc.d/init.d/smartd stop"
       timeout_seconds="60">
    </exec_method>
    <instance name="default" enabled="true"/>
    <stability value="Unstable"/>
    <template>
      <common_name>
        <loctext xml:lang="C">
          SMART monitoring service (smartd)
        </loctext>
      </common_name>
      <documentation>
        <manpage title="smartd" section="1M" manpath="/usr/local/share/man"/>
      </documentation>
    </template>
  </service>
</service_bundle>
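Once the manifest is imported, the check-and-enable step from above looks like this on the command line (service name taken from the manifest):

# svcs smartd
# pfexec svcadm enable smartd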

At this point we have a managed service that is checking the health of our disks, and if anything comes up, it will send an email to the redalert user.

Saturday, August 15, 2009

iostat command

iostat -xnzc 1

     cpu
 us sy wt id
  1  7  0 92
                    extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  348.0    0.0  6340.0  0.0  0.1    0.1    0.2   3   5 c7t0d0
    0.0  348.0    0.0  6340.0  0.0  0.1    0.1    0.2   3   5 c7t1d0
    0.0  347.0    0.0  6340.0  0.0  0.1    0.1    0.2   3   5 c7t2d0

iostat -xtc 2 2

                 extended device statistics                 tty         cpu
device    r/s   w/s   kr/s  kw/s wait actv  svc_t  %w  %b  tin tout  us sy wt id
cmdk0     2.6   0.8  134.4  13.7  0.1  0.0   32.7   1   3    0   97   1  1  0 99
fd0       0.0   0.0    0.0   0.0  0.0  0.0  981.3   0   1
sd0       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd3       0.1  21.1    1.8 369.1  0.0  0.0    0.5   0   0
sd4       0.1  21.1    1.8 369.1  0.0  0.0    0.4   0   0
sd5       0.1  21.1    1.8 369.1  0.0  0.0    0.4   0   0
sd6       0.1   0.0    1.3   0.3  0.0  0.0    0.9   0   0
                 extended device statistics                 tty         cpu
device    r/s   w/s   kr/s  kw/s wait actv  svc_t  %w  %b  tin tout  us sy wt id
cmdk0     0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0    0  309   0  0  0 100
fd0       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd0       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd3       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd4       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd5       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0
sd6       0.0   0.0    0.0   0.0  0.0  0.0    0.0   0   0


zpool iostat 3

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----- -----  ----- ------  ----- -----
dawpool     2.78G 2.72T      0     16      3   771K
rpool       7.79G 12.2G      2      0   137K  14.0K
----------  ----- -----  ----- ------  ----- -----
dawpool     2.78G 2.72T      0      0      0      0
rpool       7.79G 12.2G      0      0      0      0

Set I/O block size

You must also consider the I/O block size before creating a ZFS volume; this is not something that can be changed later, so now is the time. It's done by adding -b 64K to the zfs create command. I chose 64K for the block size, which aligns with the VMware default allocation size, thus optimizing performance. The -s option enables the sparse volume feature, aka thin provisioning. In this case the space was available, but it is my favorite way to allocate storage.

(Note: we also use 64K when formatting the disk in Windows, so it makes sense to use the same block size here.)

zfs create -b 64K -V 1800G dawpool/backup
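Since the paragraph above mentions the -s option but the command doesn't use it, here is what the thin-provisioned variant would look like (a sketch; I did not run this one):

zfs create -s -b 64K -V 1800G dawpool/backup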

ZFS Management Console WEBMIN

ZFS Web-Based Management

A web-based ZFS management tool is available to perform many administrative actions. With this tool, you can perform the following tasks:

* Create a new storage pool.
* Add capacity to an existing pool.
* Move (export) a storage pool to another system.
* Import a previously exported storage pool to make it available on another system.
* View information about storage pools.
* Create a file system.
* Create a volume.
* Take a snapshot of a file system or a volume.
* Roll back a file system to a previous snapshot.

==============================================
Note: this is what I found regarding Webmin on OpenSolaris:

1. Install the Sun Webmin package by running: pkg install SUNWwebmin
2. Go to /usr/sfw/lib/webmin and run ./setup.sh
3. Edit /etc/webmin/miniserv.users and add your primary user as follows:
admin:x:101
- This allows you to log in to the server.
4. Change the password for the Webmin account admin.
(When running setup I was not asked to provide a user ID/password, so login failed.)
To set/change/update the password for the user created in step 3:

#cd /usr/sfw/lib/webmin
# ./changepass.pl admin
usage: changepass.pl <webmin-directory> <login> <password>

This program allows you to change the password of a user in the Webmin
password file. For example, to change the password of the admin user
to foo, you would run:
changepass.pl /etc/webmin admin foo
This assumes that /etc/webmin is the Webmin configuration directory.

5. Edit /etc/webmin/webmin.acl: copy the existing entry for root to a new line and change root to admin, to give the admin user access to all the modules (see the sketch after this list).
6. Restart Webmin using svcadm restart webmin
7. You can now access the Webmin interface at http://<hostname>:10000/
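Step 5 in practice: a hypothetical excerpt of /etc/webmin/webmin.acl (the module names below are placeholders; the real root line on your system will list many more modules, and you should copy whatever it actually contains):

root: acl webmin zfs
admin: acl webmin zfs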

Resilvering

The process of replacing a drive can take an extended period of time, depending on the size of the drive and the amount of data in the pool. The process of moving data from one device to another device is known as resilvering, and can be monitored by using the zpool status command.

Traditional file systems resilver data at the block level. Because ZFS eliminates the artificial layering of the volume manager, it can perform resilvering in a much more powerful and controlled manner. The two main advantages of this feature are as follows:

* ZFS only resilvers the minimum amount of necessary data. In the case of a short outage (as opposed to a complete device replacement), the entire disk can be resilvered in a matter of minutes or seconds, rather than resilvering the entire disk, or complicating matters with “dirty region” logging that some volume managers support. When an entire disk is replaced, the resilvering process takes time proportional to the amount of data used on disk. Replacing a 500-Gbyte disk can take seconds if only a few gigabytes of used space is in the pool.

* Resilvering is interruptible and safe. If the system loses power or is rebooted, the resilvering process resumes exactly where it left off, without any need for manual intervention.

To view the resilvering process, use the zpool status command. For example:

# zpool status tank
  pool: tank
 state: DEGRADED
reason: One or more devices is being resilvered.
action: Wait for the resilvering process to complete.
   see: http://www.sun.com/msg/ZFS-XXXX-08
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        tank             DEGRADED     0     0     0
          mirror         DEGRADED     0     0     0
            replacing    DEGRADED     0     0     0  52% resilvered
              c1t0d0     ONLINE       0     0     0
              c2t0d0     ONLINE       0     0     0
            c1t1d0       ONLINE       0     0     0

In this example, the disk c1t0d0 is being replaced by c2t0d0. This event is observed in the status output by the presence of the replacing virtual device in the configuration. This device is not real, nor is it possible for you to create a pool by using this virtual device type. The purpose of this device is solely to display the resilvering process, and to identify exactly which device is being replaced.

Note that any pool currently undergoing resilvering is placed in the DEGRADED state, because the pool cannot provide the desired level of redundancy until the resilvering process is complete. Resilvering proceeds as fast as possible, though the I/O is always scheduled with a lower priority than user-requested I/O, to minimize impact on the system. Once the resilvering is complete, the configuration reverts to the new, complete, configuration. For example:

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: scrub completed with 0 errors on Thu Aug 31 11:20:18 2006
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors

The pool is once again ONLINE, and the original bad disk (c1t0d0) has been removed from the configuration.

Friday, August 14, 2009

SATA Mode set to AHCI

Setup raidZ with AHCI SATA

Updated the Asus BIOS to 1102 using Alt-F2 during boot (the updated BIOS file was loaded onto my USB stick). BIOS successfully updated.
At the first boot after the BIOS update I went into the BIOS setup and changed the SATA Mode to AHCI (it was set to SATA).
After the second boot I destroyed dawpool and did a format to recognize the disks again:


root@dawbckup:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0d1
/pci@0,0/pci-ide@6/ide@0/cmdk@1,0
1. c7t0d0
/pci@0,0/pci1043,82e2@9/disk@0,0
2. c7t1d0
/pci@0,0/pci1043,82e2@9/disk@1,0
3. c7t2d0
/pci@0,0/pci1043,82e2@9/disk@2,0
4. c7t3d0
/pci@0,0/pci1043,82e2@9/disk@3,0
Specify disk (enter its number): ^C
root@dawbckup:~#

zpool create dawpool raidz c7t0d0 c7t1d0 c7t2d0 spare c7t3d0

root@dawbckup:~# zpool create dawpool raidz c7t0d0 c7t1d0 c7t2d0 spare c7t3d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7t0d0s0 is part of exported or potentially active ZFS pool dawpool. Please see zpool(1M).
root@dawbckup:~# zpool create -f dawpool raidz c7t0d0 c7t1d0 c7t2d0 spare c7t3d0
root@dawbckup:~#


root@dawbckup:~# zpool status
pool: dawpool
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dawpool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
        spares
          c7t3d0    AVAIL

errors: No known data errors

pool: rpool
state: ONLINE
scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c0d1s0  ONLINE       0     0     0

errors: No known data errors
root@dawbckup:~#

Wednesday, August 12, 2009

install package from commandline (gcc compiler)

root@dawbckup:/usr/sbin# which gcc
no gcc in /usr/sbin /usr/bin
root@dawbckup:/usr/sbin# pfexec pkg install gcc-dev
DOWNLOAD                      PKGS      FILES   XFER (MB)
Completed                    20/20  3095/3095 37.97/37.97

PHASE                      ACTIONS
Install Phase            3932/3932

PHASE                        ITEMS
Reading Existing Index         8/8
Indexing Packages            20/20
root@dawbckup:/usr/sbin#

Create raidz pool dawpool with 3 disks with hot spare

I bought 4 Samsung 1 TB disks with 32 MB cache to build raidz storage consisting of 3 disks plus a hot spare.

After I finished installing the disks in my server, it was time to discover the new disks:

root@dawbckup:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0d1
/pci@0,0/pci-ide@6/ide@0/cmdk@1,0
1. c5d0
/pci@0,0/pci-ide@9/ide@0/cmdk@0,0
2. c5d1
/pci@0,0/pci-ide@9/ide@0/cmdk@1,0
3. c6d0
/pci@0,0/pci-ide@9/ide@1/cmdk@0,0
4. c6d1
/pci@0,0/pci-ide@9/ide@1/cmdk@1,0
Specify disk (enter its number): ^C

We see c0d1 as the OS disk holding OpenSolaris.

root@dawbckup:~# zpool create dawpool raidz c5d0 c5d1 c6d0 spare c6d1
root@dawbckup:~#

root@dawbckup:~# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
dawpool  2.72T  1.22M  2.72T   0%  ONLINE  -
rpool      20G  7.53G  12.5G  37%  ONLINE  -
root@dawbckup:~# zpool status
pool: dawpool
state: ONLINE
scrub: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        dawpool    ONLINE       0     0     0
          raidz1   ONLINE       0     0     0
            c5d0   ONLINE       0     0     0
            c5d1   ONLINE       0     0     0
            c6d0   ONLINE       0     0     0
        spares
          c6d1     AVAIL

errors: No known data errors

pool: rpool
state: ONLINE
scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c0d1s0  ONLINE       0     0     0

errors: No known data errors
root@dawbckup:~#

root@dawbckup:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
dawpool                   95.2K  1.78T  25.3K  /dawpool
rpool                     9.41G  10.3G    78K  /rpool
rpool/ROOT                5.61G  10.3G    18K  legacy
rpool/ROOT/be_name        5.51G  10.3G  3.37G  /
rpool/ROOT/nfo2             57K  10.3G  2.94G  /
rpool/ROOT/nfo_setup        39K  10.3G  2.24G  /
rpool/ROOT/nfo_static       60K  10.3G  3.07G  /
rpool/ROOT/opensolaris    82.6M  10.3G  2.81G  /
rpool/ROOT/opensolaris-1  24.8M  10.3G  3.33G  /
rpool/dump                1.87G  10.3G  1.87G  -
rpool/export              39.8M  10.3G    19K  /export
rpool/export/home         39.8M  10.3G    19K  /export/home
rpool/export/home/admin   39.8M  10.3G  39.8M  /export/home/admin
rpool/swap                1.87G  12.2G    66K  -
root@dawbckup:~# zfs create -V 1700G dawpool/backup
root@dawbckup:~#


root@dawbckup:~# zfs get shareiscsi dawpool
NAME     PROPERTY    VALUE  SOURCE
dawpool  shareiscsi  off    default
root@dawbckup:~#
root@dawbckup:~# zfs set shareiscsi=on dawpool
root@dawbckup:~# zfs get shareiscsi dawpool
NAME     PROPERTY    VALUE  SOURCE
dawpool  shareiscsi  on     local
root@dawbckup:~# iscsitadm list target -v
Target: dawpool/backup
    iSCSI Name: iqn.1986-03.com.sun:02:....7c64-....-....-....-....3cd75dac
    Alias: dawpool/backup
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.7T
            Backing store: /dev/zvol/rdsk/dawpool/backup
            Status: online
root@dawbckup:~#
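Note that shareiscsi is an inherited property, so setting it on dawpool also shares the dawpool/backup zvol, which is why iscsitadm shows it as a target. To share only the volume itself, this should also work (a sketch; I set it at the pool level instead):

zfs set shareiscsi=on dawpool/backup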

I fired up the MS iSCSI initiator on my workstation and the target was recognized in Disk Management.

I added it as one drive of 1700 GB and performed a full format (no Quick Format):
the NTFS format with 64K clusters started at 17:15 and was 20% through the 1700 GB after 15 minutes.