CPSO-NUMPY

Brief Introduction

Optimizing neural networks (NNs) with particle swarm optimization (PSO) or cooperative particle swarm optimization (CPSO) has been explored in many previous studies. This study introduces two new CPSO variants, merge-CPSO (MCPSO) and decompose-CPSO (DCPSO), which replace the base CPSO used to optimize decomposed neural networks. Solving high-dimensional problems with cooperative particle swarm optimization can introduce the issue of saturation, which directly degrades the resulting solution by moving particles arbitrarily. A previous study used random regrouping and factorization in NN decomposition to observe their effects on the performance of training neural networks with CPSO. This study observes the effects of using MCPSO and DCPSO with factorized and non-factorized decompositions of the neural network. The experiments were performed over 5 data sets with dimensionality ranging from 35 to 827 to compare the performance of the optimization algorithms and decompositions. Using the two new algorithms, MCPSO and DCPSO, this study found a slight improvement over the base CPSO.
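The repository's actual implementation is not reproduced here, so the following is a minimal NumPy sketch of the base CPSO idea the paragraph describes: the weight vector is split into subswarms, and each particle is evaluated by substituting its slice into a shared context vector built from the best slices of the other subswarms. All function and parameter names are hypothetical, and a simple sphere function stands in for the network's training loss.

import numpy as np

rng = np.random.default_rng(0)

def sphere(w):
    # Toy loss standing in for a neural network's training error.
    return float(np.sum(w ** 2))

def cpso(loss, dim, n_parts=4, n_particles=10, iters=100,
         w_inertia=0.72, c1=1.49, c2=1.49):
    # Cooperative PSO sketch: split the dim-dimensional weight vector
    # into n_parts subswarms; each subswarm optimizes its slice while
    # the other slices are held at their best values (the context vector).
    parts = np.array_split(np.arange(dim), n_parts)
    pos = [rng.uniform(-1, 1, (n_particles, len(p))) for p in parts]
    vel = [np.zeros((n_particles, len(p))) for p in parts]
    pbest = [p.copy() for p in pos]
    pbest_f = [np.full(n_particles, np.inf) for _ in parts]
    context = rng.uniform(-1, 1, dim)  # best slice from every subswarm
    best_f = loss(context)

    for _ in range(iters):
        for j, idx in enumerate(parts):
            for i in range(n_particles):
                # Evaluate particle i of subswarm j inside the context vector.
                trial = context.copy()
                trial[idx] = pos[j][i]
                f = loss(trial)
                if f < pbest_f[j][i]:
                    pbest_f[j][i] = f
                    pbest[j][i] = pos[j][i].copy()
                if f < best_f:
                    best_f = f
                    context[idx] = pos[j][i]
            # Standard PSO velocity/position update for this subswarm.
            r1 = rng.random(pos[j].shape)
            r2 = rng.random(pos[j].shape)
            vel[j] = (w_inertia * vel[j]
                      + c1 * r1 * (pbest[j] - pos[j])
                      + c2 * r2 * (context[idx] - pos[j]))
            pos[j] = pos[j] + vel[j]
    return context, best_f

weights, final_loss = cpso(sphere, dim=35)
print(final_loss)

MCPSO and DCPSO, as described above, adapt how this decomposition is applied rather than the per-subswarm update itself; their exact merging and decomposition rules are defined in the project report.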

Project Report

About

Source code of an internship project: MCPSO & DCPSO for training large-scale neural networks.
