[Bioclusters] limiting job memory size on G5s
Rayson Ho
raysonlogin at yahoo.com
Mon Mar 7 14:06:59 EST 2005
I think SGE handles it by setting the hard limit at job startup time...
Run the following script inside SGE; what is the output?
#!/bin/sh
ulimit -a
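If the limits in that output are not what you expect, here is a rough
sketch of requesting a per-job hard memory limit at submission time
(assuming the script above is saved as check_limits.sh and that h_vmem is
available on your queues; the 2G value is only a placeholder):

# submit the ulimit check as an SGE job
qsub -cwd -j y check_limits.sh
# request a 2 GB hard virtual-memory limit for a real job
qsub -cwd -l h_vmem=2G your_job.sh

When h_vmem is requested this way, the execution daemon applies the
corresponding setrlimit() values before the job script starts, which is
what the ulimit -a output above should then reflect.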
Rayson
--- Barry J Mcinnes <Barry.J.Mcinnes at noaa.gov> wrote:
> Hi Chris,
> did you get anywhere with this request?
> We have tried a test PBS cluster and cannot get it to limit the
> execution size either, although we are still trying to get them to
> make this happen.
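> For comparison, the usual way to ask PBS for a per-job memory cap is a
> resource request on the job itself (values illustrative; whether pbs_mom
> actually enforces them is site-dependent):
>
> # per-process and whole-job memory requests in a PBS job script
> #PBS -l pmem=2gb
> #PBS -l mem=2gb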
>
>
> Hi Barry,
>
> This is probably a Grid Engine issue; enforcing memory usage limits for
> the purpose of sending suspend/terminate signals etc. has long been part
> of the SGE feature set.
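> As a quick illustration (queue name hypothetical), those limits live in
> the queue configuration and can be inspected with:
>
> # show the memory-related limit fields of a queue
> qconf -sq all.q | egrep 'vmem|rss|data'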
>
> You may want to post this message to the Grid Engine users mailing list:
> users at gridengine.sunsource.net to see if anyone else is doing memory
> limit enforcement on Mac OS X with SGE 6.
>
> I did a brief search through the open bug reports but did not see any
> open issues that match your problem.
>
> That said though, I've only seen jobs trip SGE queues into error state
> 'E' when the job itself failed in a spectacular manner. It's odd that it
> errors out several queues and then runs on a different box.
>
> If you are willing to share your test code I'd be interested in trying
> to replicate it on one of the G5 clusters I have access to.
>
> Regards,
> Chris
> --
> ------
> Barry Mc Innes
> Email: Barry.J.McInnes at noaa.gov
> Phone: 303-4976231 FAX: 303-4977013
> Smail: NOAA/CDC
> 325 Broadway R/CDC1
> Boulder CO 80305
> ------