OSCER-USERS-L Archives

OSCER users

OSCER-USERS-L@LISTS.OU.EDU

Subject:
From: Henry Neeman <[log in to unmask]>
Reply To: Henry Neeman <[log in to unmask]>
Date: Mon, 17 Apr 2023 09:42:57 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (69 lines)
OSCER users,

REMINDER:

Schooner scheduled maintenance outage Wed Apr 19 8am-midnight CT

Before the maintenance period:

Jobs that wouldn't finish before the maintenance period starts
won't be able to start at all.

Instead, such jobs will remain pending in the batch queue
until after the maintenance is completed.

That way, such jobs can run for the full wall clock runtime
that they've requested.

So, if you want a job to run before the maintenance period
begins, then in your batch scripts, you might want to
reduce the amount of wall clock runtime you request.

(Once the maintenance period ends, that won't apply any more.)
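For example, on a Slurm-based cluster, a shorter wall clock request in the batch script lets the scheduler fit the job in before the outage window opens. This is only an illustrative sketch -- the job name, partition, and resource values below are placeholders, not OSCER-specific settings:

```shell
#!/bin/bash
# Hypothetical batch script: request a reduced wall clock limit so the
# job can start and finish before the maintenance window opens
# Wed Apr 19 at 8am CT. Partition and resource values are placeholders.
#SBATCH --job-name=short_job
#SBATCH --partition=normal
#SBATCH --ntasks=1
#SBATCH --time=04:00:00   # e.g. 4 hours instead of 48

./my_program
```

On most Slurm installations, a job that is still pending can also have its time limit lowered without resubmitting, e.g. `scontrol update JobId=<jobid> TimeLimit=04:00:00` (users can lower their own job's limit; raising it requires an administrator).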

Henry

----------

On Thu, 13 Apr 2023, Henry Neeman wrote:

>OSCER users,
>
>Schooner scheduled maintenance outage Wed Apr 19 8am-midnight CT
>
>We'll do the following:
>
>(1) Update the Linux kernel, which we hope will resolve
>a bug that we've recently tripped in the Network File System
>(NFS) software.
>
>(If this upgrade doesn't resolve the bug, we have a workaround,
>but we'd rather have a bug fix.)
>
>(2) Deploy OURdisk metadata/monitor/manager servers.
>
>(3) If time permits, update Slurm to the most recent stable
>major version (version 22).
>
>This will give us the ability to create a "burst buffer" for
>applications that do large numbers of tiny reads.
>
>Soon, we will deploy a burst buffer server, with 16 NVMe SSDs
>(~40 TB usable), hopefully in production by May -- that'll
>make I/O for such applications significantly faster.
>
>(4) If time permits, physically shift support and diskfull
>servers within the support/diskfull racks, to better utilize
>power and cooling capacity for those functions.
>
>(5) If time permits, install Power Distribution Unit (PDU)
>power strips in compute/GPU racks, which will increase
>the power capacity for compute/GPU nodes, both OSCER-owned
>and condominium.
>
>We apologize for the inconvenience -- as always, our goal
>is to make OSCER resources better!
>
>The OSCER Team ([log in to unmask])
>
