View/Open
Concurrency-and-Computation (1).pdf (1.158 MB)
Date
2021
Authors
LERAT, Jean-Sébastien
Mahmoudi, Sidi Ahmed
Mahmoudi, Saïd

    Single node deep learning frameworks: Comparative study and CPU/GPU performance analysis

Abstract
Deep learning provides an efficient set of methods for learning from massive volumes of data using complex deep neural networks. To facilitate the design and implementation of algorithms, deep learning frameworks provide a high-level programming interface. Built on these frameworks, new models and applications achieve increasingly accurate predictions. One class of deep learning applications, the Internet of Things, gathers a continuous flow of data, causing an explosion in data volume. To handle this data management issue, computation technologies offer new perspectives for analyzing more data with more complex models. In this context, a cluster of computers can be used either to deliver a model quickly or to enable the design of a complex neural network spread across machines. An alternative is to distribute a deep learning task over HPC cloud computing resources and to scale the cluster in order to train a neural network quickly and efficiently. As a first step toward designing an infrastructure-aware framework able to scale the number of computing nodes, this work reviews and analyzes state-of-the-art frameworks by collecting device utilization data during the training task. We gather information about CPU, RAM, and GPU utilization for deep learning algorithms with and without multi-threading. The behavior of each framework is discussed and analyzed in order to shed light on the strengths and weaknesses of the different deep learning frameworks.
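The study's methodology rests on sampling CPU, RAM, and GPU utilization while a training task runs. The paper's actual tooling is not shown here, but the idea can be sketched as a background thread that polls a metric probe at a fixed interval. In this illustrative sketch the class and function names are invented, and the probe is a portable stand-in (process CPU time); a real measurement would plug in OS-level probes such as `/proc` readings or `nvidia-smi` queries for the GPU.

```python
# Hypothetical sketch of utilization sampling during training: a daemon
# thread polls a pluggable probe() at a fixed interval and records
# timestamped samples. All names here are illustrative, not the paper's code.
import threading
import time


class UtilizationSampler:
    """Poll `probe()` every `interval` seconds while active as a context manager."""

    def __init__(self, probe, interval=0.05):
        self.probe = probe
        self.interval = interval
        self.samples = []          # list of (timestamp, value) pairs
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Take at least one sample, then keep polling until stopped.
        while True:
            self.samples.append((time.monotonic(), self.probe()))
            if self._stop.wait(self.interval):
                break

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()


def cpu_time_probe():
    # Portable stand-in metric: cumulative CPU time of this process.
    # A real study would read system-wide CPU/RAM counters and GPU stats.
    return time.process_time()


with UtilizationSampler(cpu_time_probe, interval=0.01) as sampler:
    # Simulated "training" workload keeping the CPU busy.
    total = sum(i * i for i in range(500_000))

print(f"collected {len(sampler.samples)} samples")
```

Keeping the probe pluggable is what lets the same sampler compare frameworks fairly: the measurement loop stays identical while only the workload (the framework under test) changes.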
