NSU Electronic Archive

Joining computing clusters for large-scale numerical simulation in the NumGRID project


dc.contributor.author Городничев, Максим Александрович ru_RU
dc.contributor.author M. A. Gorodnichev en_EN
dc.creator Новосибирский государственный университет ru_RU
dc.creator Novosibirsk State University en_EN
dc.creator Институт вычислительной математики и математической геофизики СО РАН ru_RU
dc.creator Institute of Computational Mathematics and Mathematical Geophysics SB RAS en_EN
dc.date.accessioned 2013-02-27T15:37:56Z
dc.date.available 2013-02-27T15:37:56Z
dc.date.issued 2013-02-27
dc.identifier.issn 1818-7900
dc.identifier.uri http://www.nsu.ru/xmlui/handle/nsu/256
dc.description.abstract The NumGRID software system for organizing computations on a federation of high-performance computing clusters for the purpose of large-scale numerical simulation is presented. The problems of organizing distributed computing on clusters are analyzed and related projects are reviewed. Results of an experimental study of the NumGRID system are demonstrated. ru_RU
dc.description.abstract The paper analyzes the problems of joining computing clusters for large-scale numerical simulation in the NumGRID project and discusses the adopted solutions in relation to related work. An outline of the NumGRID software system and the results of its experimental evaluation are presented. en_EN
dc.language.iso ru ru_RU
dc.publisher Новосибирский государственный университет ru_RU
dc.subject NumGRID ru_RU
dc.subject MPI ru_RU
dc.subject грид ru_RU
dc.subject распределенные вычисления ru_RU
dc.subject кластерные вычисления ru_RU
dc.subject cluster computing en_EN
dc.subject distributed computing en_EN
dc.subject grid en_EN
dc.subject NumGRID en_EN
dc.subject MPI en_EN
dc.title Объединение вычислительных кластеров для крупномасштабного численного моделирования в проекте NumGRID ru_RU
dc.title.alternative Joining computing clusters for large-scale numerical simulations in the NumGRID project en_EN
dc.type Article ru_RU
dc.subject.udc 004.75
dc.relation.ispartofvolume 10
dc.relation.ispartofnumber 4
dc.relation.ispartofpages 63-73
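
Judging from the abstract and the subject terms (MPI, grid, cluster computing), NumGRID targets ordinary MPI applications and lets them run on a federation of clusters. As a purely illustrative sketch under that assumption, using only standard MPI calls and nothing NumGRID-specific, such an application might look like this:

/* Illustrative sketch only: a conventional MPI program of the kind that
   cluster-joining middleware such as NumGRID is meant to run unchanged.
   Only standard MPI calls are used; no NumGRID-specific API is assumed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes   */
    MPI_Get_processor_name(node_name, &name_len);

    /* Each process reports the node it runs on. In a federated setup the
       reported nodes would belong to different clusters, while the program
       itself still sees a single MPI_COMM_WORLD. */
    printf("rank %d of %d on %s\n", rank, size, node_name);

    MPI_Finalize();
    return 0;
}

On a single cluster this would be built and launched in the usual way (for example, mpicc hello.c -o hello && mpirun -np 8 ./hello); how such a program is launched over several joined clusters is middleware-specific and is the subject of the paper itself.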

