Difference: HundredGigabitTestingLog (11 vs. 12)

Revision 12 (2011-11-14) - igable

Line: 1 to 1
 
META TOPICPARENT name="HundredGigabit"
Added:
>
>
 

November 14, 2011

http://sc-repo.uslhcnet.org

Line: 8 to 10
 Memory-to-memory testing started the afternoon of the 13th. The key to improving performance was moving from hash-based to packet-based load balancing on the Caltech Brocade.
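As a point of reference, a memory-to-memory throughput check between two hosts can be run with a generic tool such as iperf. This is only an illustrative sketch; the log does not record the exact tool or options used, and the hostname below is a placeholder.

    # On the receiving host: start an iperf (v2) server with a 4 MB TCP window
    iperf -s -w 4M

    # On the sending host: 8 parallel TCP streams for 60 s, reporting every 5 s
    iperf -c <remote-host> -w 4M -P 8 -t 60 -i 5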
Added:
>
>
Moving the ATLAS data to the scdemo nodes:

  • The command is: /opt/versions/scdemo/bin/atlas-to-scdemo <file-lists-directory>
  • The "<file-lists-directory>" contains one or more files, each listing the paths of the ATLAS files to be copied. The name of each file within this directory is the fully qualified host name of the destination host; e.g., the list of ATLAS files contained in scdemo06.heprc.uvic.ca will be copied via FDT to scdemo06 (see the sketch below).
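A minimal sketch of what a <file-lists-directory> might look like; the directory name, list contents, and file paths below are hypothetical examples, not the actual lists used.

    file-lists/
      scdemo06.heprc.uvic.ca    # ATLAS file paths destined for scdemo06
      scdemo07.heprc.uvic.ca    # ATLAS file paths destined for scdemo07

    # Each list file holds one source path per line, for example:
    #   /atlas/data/some-dataset/AOD.example._000001.root

    # Start the copies:
    /opt/versions/scdemo/bin/atlas-to-scdemo /path/to/file-lists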

November 11, 2011

Testing of MegaCLI -PDClear:

  • Test nodes consisted of scdemo06 and scdemo07. In all tests, scdemo06 was used as the client FDT node, pushing 89 x 11 GB ATLAS ROOT files from its /ssd filesystem to the /ssd filesystem on scdemo07, which was running the FDT server (see the command sketch below). Both client and server were running under non-privileged accounts, the scheduling algorithm was set to "noop", and the /ssd filesystems on both systems were XFS on a hardware RAID0 of six OCZ Deneva 2 drives with a 1 MB stripe size and write-through cache.
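The exact command lines are not recorded in this entry; the following is a plausible sketch of the setup described above, assuming a stock FDT jar and that the RAID0 device backing /ssd appears as /dev/sdb (both assumptions).

    # Select the noop I/O scheduler for the device backing /ssd (device name assumed)
    echo noop > /sys/block/sdb/queue/scheduler

    # On scdemo07: run the FDT server as a non-privileged user
    java -jar fdt.jar

    # On scdemo06: push the ATLAS ROOT files from /ssd to /ssd on scdemo07
    java -jar fdt.jar -c scdemo07.heprc.uvic.ca -d /ssd /ssd/*.root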

  1. Last test prior to PDClear: Avg: 5.784 Gb/s after 100.00%.
  2. PDClear all drives on scdemo07. RAID0 redefined. XFS filesystem /ssd recreated and remounted.
  3. The first test following PDClear was terminated prematurely because the performance was atrocious (Avg: 3.575 Gb/s after 07.10%). The server had been running as root and the default scheduling algorithm had been in effect.
  4. Corrected the scheduling algorithm and the uid of the server and ran the second test to completion. Result was poor: Avg: 4.655 Gb/s after 100.00%. However, the start had been very good (Avg: 6.539 Gb/s after 01.68%).
  5. A third test was conducted to see if a previously used disk (after the PDClear) performed better. Result gave previously expected level of performance: Avg: 5.661 Gb/s after 100.00%.
  6. Target /ssd erased, completely filled with zeros, erased again, and the test rerun. Result: Avg: 5.790 Gb/s after 100.00%.
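For completeness, the drive clear and array/filesystem rebuild in step 2 would look roughly like the MegaCli and XFS commands below. The enclosure:slot identifiers, adapter number, and device name are assumptions, and the exact MegaCli syntax can vary between controller firmware and tool versions.

    # Start (and monitor) a clear of the six physical drives (enclosure 252, slots 0-5 assumed)
    MegaCli -PDClear -Start -PhysDrv [252:0,252:1,252:2,252:3,252:4,252:5] -a0
    MegaCli -PDClear -ShowProg -PhysDrv [252:0,252:1,252:2,252:3,252:4,252:5] -a0

    # Redefine the six-drive RAID0 with a 1 MB stripe and write-through cache
    MegaCli -CfgLdAdd -r0 [252:0,252:1,252:2,252:3,252:4,252:5] WT -strpsz1024 -a0

    # Recreate and mount the XFS filesystem (device name assumed)
    mkfs.xfs -f /dev/sdb
    mount /dev/sdb /ssd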
 

November 8, 2011

Line: 272 to 293
 Figure 2: After changing the RAID configuration to write-through and using large 10 GB files created with 'dd', we see much improved disk-to-disk throughput (as shown in the two FDT outputs immediately above). Strangely, one direction is nearly 0.8 Gbps faster than the other. I don't understand the reason for this yet.
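The 10 GB test files mentioned above can be created with something like the following; the output file name is arbitrary.

    # Write a 10 GiB file of zeros to the SSD array in 1 MiB blocks
    dd if=/dev/zero of=/ssd/test10G.dat bs=1M count=10240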
 
META FILEATTACHMENT attachment="cacti_notes_2011-011-06.png" attr="" comment="" date="1320622186" name="cacti_notes_2011-011-06.png" path="cacti_notes_2011-011-06.png" size="80579" user="igable" version="1"
META FILEATTACHMENT attachment="cacti_notes_good_rate_2011-11-06.png" attr="" comment="" date="1320640987" name="cacti_notes_good_rate_2011-11-06.png" path="cacti_notes_good_rate_2011-11-06.png" size="69346" user="igable" version="2"
 