Bioinformatics.org
    Distributed Computing Power Project - Message forums

    Discussion forums: documents


    Welcome to documents
    Submitted by Unset; posted on Friday, June 01, 2001
    Projects Rough Specs
    Submitted by Robert Henkel; posted on Friday, June 01, 2001
    Distributed Computing Power Project (DCPP) Rough Specs, 6/1/2001

    Goal: As of now, no natural scientist has come forward with a project they would like worked on. When someone does, these specs could change significantly. Until then, this document serves as a starting point for the general ideas behind the project.

    Database: Because this project will work with large amounts of data, some kind of data organization must be in place. An open-source database management system such as MySQL will be used to store the data and distribute data sets (tasks) to the clients, which will run computations on them.

    Server software: Because this type of project distributes data sets to many users over the Internet, a data server must be in place. In its simplest form, the server software will let clients connect and check out data sets from the database, and let them send completed data sets (tasks) back to the data server. The server will update the database to track who has which tasks checked out and how long they have been checked out. If a task has been checked out for a very long time, it will be reopened for someone else to check out. The server will also check in completed data and mark it as complete so that the task is not redistributed to another user after it has been completed.

    Client software: The client software will connect to the server and retrieve tasks. Once data is retrieved, the client will run computations on the user's machine. When finished, it will connect back to the server and upload the results. A client will have only one task checked out at a time; when that task is completed, another will be downloaded and checked out for that user's account.
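    The check-out/check-in lifecycle described above can be sketched as follows. This is a minimal illustration, not the project's implementation: the spec names MySQL, but sqlite3 is used here only so the sketch is self-contained, and all table and function names are invented.

    ```python
    import sqlite3
    import time

    # Reopen tasks that have been checked out longer than this (assumed value).
    TIMEOUT = 7 * 24 * 3600  # one week, in seconds

    def init_db(conn):
        # Hypothetical schema: one row per task, tracking who has it and since when.
        conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
            id INTEGER PRIMARY KEY,
            payload TEXT,
            status TEXT DEFAULT 'open',   -- 'open', 'checked_out', or 'complete'
            owner TEXT,
            checked_out_at REAL)""")

    def checkout(conn, user):
        # Reopen any stale checkouts before handing out work.
        conn.execute("UPDATE tasks SET status='open', owner=NULL "
                     "WHERE status='checked_out' AND checked_out_at < ?",
                     (time.time() - TIMEOUT,))
        row = conn.execute("SELECT id, payload FROM tasks "
                           "WHERE status='open' LIMIT 1").fetchone()
        if row is None:
            return None  # no open tasks for this client right now
        conn.execute("UPDATE tasks SET status='checked_out', owner=?, "
                     "checked_out_at=? WHERE id=?",
                     (user, time.time(), row[0]))
        return row

    def checkin(conn, task_id, result):
        # Mark complete so the task is never redistributed to another user.
        conn.execute("UPDATE tasks SET status='complete', payload=? WHERE id=?",
                     (result, task_id))
    ```

    Because each client holds only one task at a time, the server side stays simple: a checkout is just a row update, and a stale checkout is recovered by flipping the status back to open.
    
    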
    Data integrity: Because data will be sent to and retrieved from many different clients all over the world, data integrity will be an issue. It is very important to ensure that neither the results nor the source data are manipulated in any way before or during computing time. Some sort of checks will be in place to help prevent this.

    Programming tools: The idea behind the project is to use many computers to do the computing work, so it is important that as many people as possible can run the client on their platform. In today's market, Windows has the largest share, so for this project to be successful there must be a Windows client. In the long run it would be beneficial to have the client run on as many different platforms as possible. At this time, Visual Basic is being considered for the Windows client and possibly the server software; this may change and/or be mixed with other languages. For now, the Windows platform is the main target for the client.
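    The spec leaves "some sort of checks" open. A minimal sketch of two techniques that volunteer-computing projects commonly use, assuming nothing about DCPP's actual design: hashing the source data so a client can verify it arrived unmodified, and sending the same task to several clients and accepting only a majority result. Both function names are invented for illustration.

    ```python
    import hashlib
    from collections import Counter

    def fingerprint(data: bytes) -> str:
        # Server publishes this hash with the task; the client recomputes
        # it on the downloaded data to detect tampering in transit.
        return hashlib.sha256(data).hexdigest()

    def majority_result(results):
        # Redundant computation: the same task goes to several clients,
        # and a result is accepted only if a strict majority agree.
        if not results:
            return None
        value, count = Counter(results).most_common(1)[0]
        return value if count > len(results) / 2 else None
    ```

    Redundancy trades computing power for trust: each task costs several times as much work, but a single manipulated client can no longer corrupt the stored results.
    
    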

    Copyright © 2016 · Scilico, LLC