[Bioclusters] Memory Usage for Blast - question
Dinanath Sulakhe
sulakhe at mcs.anl.gov
Wed Mar 9 17:09:41 EST 2005
Hi,
I am not sure if this is the right place to ask this question!
I am running NCBI BLAST in parallel on a cluster of 80 nodes (I am
running NCBI NR against itself). Each node is a dual-processor machine.
I am using Condor to submit the jobs to this cluster. The problem I keep
running into is that whenever two BLAST jobs (each with 100 sequences)
are assigned to one node (one on each processor), the node cannot handle
the combined memory of the two jobs. The PBS mom daemon on the node can
no longer allocate the memory it needs to monitor the jobs, so it fails
and the jobs are killed.
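One idea I had is to add a memory requirement to the submit description,
so a job only matches a slot advertising enough RAM (untested sketch,
and it assumes a plain vanilla-universe Condor pool, which may not match
our Condor-to-PBS setup; the 700MB threshold is just a guess from the
numbers the admin reported):

    universe     = vanilla
    executable   = run_blast.sh
    # Memory is the machine ClassAd attribute, in megabytes; requiring
    # >= 700 should keep a job off slots too small to hold one run.
    requirements = (Memory >= 700)
    queue

I don't know whether that would actually stop the PBS mom failures,
though, so I would still like to shrink the jobs themselves.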
Condor doesn't recognize this failure and assumes the job completed
successfully, but in reality only a few sequences get processed before
the job is killed.
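In the meantime I am thinking of wrapping blastall in a small script so
that a truncated run at least exits nonzero where Condor can see it
(rough, untested sketch; note that a query with no hits produces no
-m 8 lines, so the count check below can also flag perfectly clean
runs):

    #!/bin/sh
    # Run blastall exactly as before and remember its exit status.
    blastall -i "$input" -o "$output" -d "$db" -p blastp -m 8 -F F
    status=$?

    # In -m 8 tabular output, column 1 is the query id; compare the
    # number of distinct queries with hits against the number submitted.
    expected=`grep -c '^>' "$input"`
    seen=`cut -f1 "$output" | sort -u | wc -l`

    if [ "$status" -ne 0 ] || [ "$seen" -lt "$expected" ]; then
        echo "blast run incomplete: $seen of $expected queries" >&2
        exit 1
    fi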
Now the site admin is asking me whether it's possible to reduce the
amount of memory these BLAST jobs use. He says the jobs are requesting
about 600-700MB of RAM each, and he wants that brought down to at most
500MB.
Is it possible to reduce the amount of RAM BLAST requests by tweaking
any of its parameters?
My blast options are:
blastall -i $input -o $output -d $db -p blastp -m 8 -F F
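One thing I can try on my side is splitting each 100-sequence input into
smaller batches, on the guess that a smaller query batch lowers the peak
memory (I have not verified that, and the chunk size of 25 below is
arbitrary):

    # Split the input FASTA into chunks of 25 sequences each.
    awk -v size=25 '/^>/ { if (n % size == 0) file = sprintf("chunk_%03d.fa", n / size); n++ }
                    { print > file }' "$input"

    # Run each chunk with the same options as before.
    for chunk in chunk_*.fa; do
        blastall -i "$chunk" -o "$chunk.out" -d "$db" -p blastp -m 8 -F F
    done

If the memory use comes mostly from scanning the NR database rather
than from the query batch, this won't help much, which is why I am
asking about the parameters.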
Please let me know,
Thank you,
Dina