[Bioclusters] Memory Usage for Blast - question
Malay
mbasu at mail.nih.gov
Thu Mar 10 10:30:23 EST 2005
Dinanath Sulakhe wrote:
> Hi,
> I am not sure if this is the right place to ask this question!
> I am running NCBI BLAST in parallel on a cluster with 80 nodes (I am
> running NCBI NR against itself). Each node is a dual-processor machine.
>
> I am using Condor to submit the jobs to this cluster. The problem I am
> running into is that whenever two BLAST jobs (each with 100 sequences)
> are assigned to one node (one per processor), the node cannot handle
> the combined memory used by the two jobs. The PBS mom daemon on the
> node cannot allocate the memory it needs to monitor the jobs, so it
> fails, killing the jobs.
>
> Condor doesn't recognize this failure and assumes the job completed
> successfully, but in fact only a few sequences get processed before
> the job is killed.
>
> Now the site admin is asking me whether it's possible to reduce the
> amount of memory these BLAST jobs use. He says the jobs are requesting
> about 600-700 MB of RAM each, and he wants that reduced to at most 500 MB.
>
> Is it possible to reduce the amount of RAM BLAST requests by tweaking
> any of its parameters?
>
> My blastall options are:
>
> blastall -i $input -o $output -d $db -p blastp -m 8 -F F
>
> Please let me know,
> Thank you,
> Dina
>
I am not sure what your installation setup is, but in many cases this
can be solved by splitting the database into smaller volumes with the
-v option of formatdb.
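
For example, a minimal sketch (the FASTA file name and the volume size
here are guesses; tune -v so each volume fits under your 500 MB ceiling):

    # split NR into volumes of ~500 million residues each (roughly
    # 500 MB apiece for protein data); -p T marks the input as protein
    formatdb -i nr -p T -v 500

formatdb also writes an alias file (nr.pal) spanning the volumes, so
your blastall command line can stay the same with -d nr. blastall should
then search the volumes one at a time, which keeps the peak memory per
job well below what mapping the whole database at once requires.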
-Malay