Results from my artificial neural net


Ran a small XOR gate through the system and it came up with these results:

 

A   B   Output
0   0   0.002464
0   1   0.988089
1   0   0.989219
1   1   0.028705

 

I'll take that as a success.

 

Wm.


Oh, that sounds like the beginning of a decision cutting plane.

 

Hot dog or not hot dog; now you can decide.

 

https://youtu.be/ACmydtFDTGs

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


Found a wee bug; now the results are in line with expectations:

 

A   B   Output
0   0   0.001
1   0   0.998
0   1   0.998
1   1   0.001

 


Hot dog!

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


avrcandies wrote:

Hot dog!

 

What are you banging on about? lol


What are you banging on about? lol

Since you are making a separator, it seemed like this is where you are headed: https://youtu.be/ACmydtFDTGs

 

But I think you've already been there, hot dog... may the lucky bun be with you.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


The hot dog app on HBO's Silicon Valley; good fun. Take care not to make a Bender.

 

https://www.youtube.com/watch?v=0qBlPa-9v_M


Guys, I'm testing a simple XOR gate on the neural net: 2 input cells, 1 output cell, and 5 or more hidden cells. And it works.

But when I tested it with 2 input cells, 4 hidden cells, and one output cell it produced incorrect results. I would have assumed 4 hidden cells would have been sufficient.

Anyone got an opinion on these results?
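
For readers following along, here is a minimal sketch (my own illustration, not the poster's code) of the forward pass for the 2-input, N-hidden, 1-output sigmoid network being described; HIDDEN is the hidden-cell count being varied in the tests above.

#include <math.h>

#define INPUTS  2
#define HIDDEN  5	/* the hidden-cell count being varied */

double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

/* Forward pass of a 2-HIDDEN-1 network.  The weights and biases are assumed
 * to be already trained; this only shows the shape of the computation. */
double forward(const double in[INPUTS],
               const double w_ih[HIDDEN][INPUTS], const double b_h[HIDDEN],
               const double w_ho[HIDDEN], double b_o)
{
	double h[HIDDEN];

	for (int j = 0; j < HIDDEN; j++) {
		double z = b_h[j];
		for (int i = 0; i < INPUTS; i++)
			z += w_ih[j][i] * in[i];
		h[j] = sigmoid(z);
	}

	double z = b_o;
	for (int j = 0; j < HIDDEN; j++)
		z += w_ho[j] * h[j];

	return sigmoid(z);	/* ends up near 0 or 1, as in the result tables above */
}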


I might have found the Hot Dog how-to

 

https://www.tensorflow.org/tutorials/quickstart/beginner


Feck your hot dogs, I got it working. It will learn an XOR gate with 3 or more hidden cells.


:-), now show the code or it did not happen.


I'll show my code tomorrow; I have a funeral to attend...


Guys, that was some funeral, Jesus! 21 hours of beating drink into you.

 

Only in Ireland.


Designed a generic algorithm for back-propagating neural nets. It works with 1 and 3 hidden layers when I don't update the bias weights. When I include code to update the bias weights, it fails completely.

So the error must be because it is not following the path of descent properly. My guess!

Can anyone suggest some things I should try? The update factor is 0.01.

Wm.
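
In case it helps with the bias problem: for a sigmoid output cell trained on squared error, the textbook gradient for the bias is the same shared delta as for the weights, just multiplied by 1 instead of by the upstream activation. A minimal sketch with my own names (not the poster's code), using the 0.01 update factor mentioned above:

/* Textbook update for one sigmoid output cell with squared error
 * E = 0.5 * (o - target)^2.  Note the bias gradient is delta * 1,
 * not delta * bias -- an easy slip when adding bias updates. */
void update_output_cell(double o, double target, double lr,
                        const double h[], double w[], double *bias, int n_hidden)
{
	double delta = (o - target) * o * (1.0 - o);	/* dE/dz at the output */

	for (int j = 0; j < n_hidden; j++)
		w[j] -= lr * delta * h[j];	/* dE/dw_j = delta * h_j */

	*bias -= lr * delta;	/* dE/db = delta */
}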


For anyone interested, the basic layout of the back-propagation algorithm is listed below. It needs tidying up.

void back_propagate_outputs(struct ann_builder * master_net){


	struct neural_layer * output_node;
	struct neural_layer * internal_node;


	struct internal_layer * internal_net;
	struct output_layer * output_net1;

	double weight_adjust = 0.00;

	// get start neural layers
	output_node = master_net->start_ptr;
	

	// search for the output neurons
	while(output_node->type != OUTPUT)
		output_node = output_node->next;

	internal_node = output_node->prev;

	// find last internal layer
	internal_net = output_node->prev->data_set_ptr;
	output_net1 = output_node->data_set_ptr;

	// weight updates
	for(int i = 0; i < output_node->cells; i++){
		for(int j = 0; j < internal_node->cells; j++){

			// find the dirivatives
			weight_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, internal_net[j].sigmod);

			// update weights
			internal_net[j].internal_wieghts_adjust[i] = update_weights(internal_net[j].internal_layer_wieghts[i], weight_adjust);
		}
	}




	// Bias Update on all nodes
	for (int i = 0; i < output_node->cells; i++)
		output_net1[i].bias_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, output_net1[i].bias);
		




	struct input_layer 		* input_layer 		= output_node->prev->prev->data_set_ptr;
	struct internal_layer 	* next_internal 	= output_node->prev->prev->data_set_ptr;

	struct internal_layer 	* internal_layer 	= output_node->prev->data_set_ptr;



	struct neural_layer 	* int_node = output_node->prev;
	struct neural_layer 	* next_node = output_node->prev->prev;


	double dedo;
	double dodzo;


	for(int j = 0; j < int_node->cells; j++){

		internal_net[j].equation_holder = 0;

		for(int i = 0; i < output_node->cells; i++){

			dedo = output_net1[i].sigmod - output_net1[i].target;
			dodzo = output_net1[i].sigmod * (1 - output_net1[i].sigmod);

			internal_net[j].weight_calc[i] = dedo * dodzo;
			internal_net[j].equation_holder += internal_net[j].weight_calc[i] * internal_net[j].internal_layer_wieghts[i];
		}
	}


	do {


		/**
		 * Do next backward propagation!!!
		 */
		for (int i = 0; i < int_node->cells; i++)
			internal_net[i].h1zh1 = internal_layer[i].sigmod * (1 - internal_layer[i].sigmod);


		for (int j = 0; j < int_node->cells; j++) {
			for (int i = 0; i < next_node->cells; i++) {

				if (next_node->type == INPUT) {

					input_layer[i].weight_adjusts[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * input_layer[i].input_value;
					input_layer[i].weight_adjusts[j] = update_weights(input_layer[i].weights[j], input_layer[i].weight_adjusts[j]);
				}

				if (next_node->type == HIDDEN) {

					next_internal[i].internal_wieghts_adjust[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * next_internal[i].sigmod;
					next_internal[i].internal_wieghts_adjust[j] = update_weights(next_internal[i].internal_layer_wieghts[j], next_internal[i].internal_wieghts_adjust[j]);
				}
			}
		}


		// quit algoritm, processing backward pass finished!!!
		if (next_node->type == INPUT)
			return;


		static int look = 0;
		for (int j = 0; j < next_node->cells; j++) {

			next_internal[j].equation_holder = 0;

			for (int i = 0; i < internal_node->cells; i++) {

				double value = 0.00;


				if (look >= internal_node->outputs)
					look = 0;

				for (int k = 0; k < internal_node->outputs; k++)
					value += internal_net[i].weight_calc[k];

				next_internal[j].weight_calc[i] = value * internal_net[i].h1zh1 * internal_net[i].internal_layer_wieghts[look++];
				next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
			}
		}

		// ready next back progation layer to process!!!
		int_node = next_node;
		internal_net = int_node->data_set_ptr;


		next_node = next_node->prev;

		input_layer = next_node->data_set_ptr;
		next_internal = next_node->data_set_ptr;

	} while (1);
}
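
For anyone trying to compile this: layer_out_update() and update_weights() are not listed in the post. The sketch below is only a guess at what they might contain, inferred from how they are called above; the real functions may differ.

/* ASSUMED helper implementations, inferred from the call sites above. */

#define LEARNING_RATE 0.01	/* the update factor mentioned earlier in the thread */

/* Gradient of squared error w.r.t. a weight feeding a sigmoid output cell:
 * dE/dw = (output - target) * output * (1 - output) * upstream_activation */
double layer_out_update(double sigmod, double target, double upstream)
{
	return (sigmod - target) * sigmod * (1.0 - sigmod) * upstream;
}

/* Plain gradient-descent step: return the adjusted weight. */
double update_weights(double old_weight, double gradient)
{
	return old_weight - LEARNING_RATE * gradient;
}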

 


Oh, there is an error in this part.

		static int look = 0;
		for (int j = 0; j < next_node->cells; j++) {

			next_internal[j].equation_holder = 0;

			for (int i = 0; i < internal_node->cells; i++) {

				double value = 0.00;


				if (look > internal_node->outputs - 1)
					look = 0;

				for (int k = 0; k < internal_node->outputs; k++) {
					value += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;
				}
				

				next_internal[j].weight_calc[i] = value;
				next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
			}
		}

 


Cracked it; the algorithm can now handle any neural network architecture.

 

			for (int i = 0; i < internal_node->cells; i++) {

				double value = 0.00;


				if (look >= internal_node->outputs)
					look = 0;

				for (int k = 0; k < internal_node->outputs; k++)
					value += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k];

				value = value * (1 - internal_net[i].sigmod);

				next_internal[j].weight_calc[i] = value * internal_net[i].h1zh1;
				next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
			}
		}

 


I hate to point it out, but you're spelling 'weight' in two different ways... that's going to confuse the hell out of you at some future time!

 

Neil


barnacle wrote:

I hate to point it out, but you're spelling 'weight' in two different ways... that's going to confuse the hell out of you at some future time!

 

Neil

 

Thanks for pointing that out. :-)


Found a coding error; now it does not work. Neural nets are funny: even with an error in the processing you can still get the correct results.

Strange.

 

EDITED: Was not an error. 


Played about with the generic algorithm and eventually got it working. Have a look. PS: the spelling mistakes have not been corrected; please ignore them.

void back_propagate_outputs(struct ann_builder * master_net){


	struct neural_layer * output_node;
	struct neural_layer * internal_node;


	struct internal_layer * internal_net;
	struct output_layer * output_net1;

	double weight_adjust = 0.00;

	// get start neural layers
	output_node = master_net->start_ptr;
	

	// search for the output neurons
	while(output_node->type != OUTPUT)
		output_node = output_node->next;

	internal_node = output_node->prev;

	// find last internal layer
	internal_net = output_node->prev->data_set_ptr;
	output_net1 = output_node->data_set_ptr;

	// weight updates
	for(int i = 0; i < output_node->cells; i++){
		for(int j = 0; j < internal_node->cells; j++){

			// find the dirivatives
			weight_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, internal_net[j].sigmod);

			// update weights
			internal_net[j].internal_wieghts_adjust[i] = update_weights(internal_net[j].internal_layer_wieghts[i], weight_adjust);
		}
	}




	// Bias Update on all nodes
	for (int i = 0; i < output_node->cells; i++)
		output_net1[i].bias_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, output_net1[i].bias);
		




	struct input_layer 		* input_layer 		= output_node->prev->prev->data_set_ptr;
	struct internal_layer 	* next_internal 	= output_node->prev->prev->data_set_ptr;

	struct internal_layer 	* internal_layer 	= output_node->prev->data_set_ptr;



	struct neural_layer 	* int_node = output_node->prev;
	struct neural_layer 	* next_node = output_node->prev->prev;


	double dedo;
	double dodzo;


	for(int j = 0; j < int_node->cells; j++){

		internal_net[j].equation_holder = 0;

		for(int i = 0; i < output_node->cells; i++){

			dedo = output_net1[i].sigmod - output_net1[i].target;
			dodzo = output_net1[i].sigmod * (1 - output_net1[i].sigmod);

			internal_net[j].weight_calc[i] = dedo * dodzo;
			internal_net[j].equation_holder += internal_net[j].weight_calc[i] * internal_net[j].internal_layer_wieghts[i];
		}
	}


	do {


		/**
		 * Do next backward propagation!!!
		 */
		for (int i = 0; i < int_node->cells; i++)
			internal_net[i].h1zh1 = internal_layer[i].sigmod * (1 - internal_layer[i].sigmod);


		for (int j = 0; j < int_node->cells; j++) {
			for (int i = 0; i < next_node->cells; i++) {

				if (next_node->type == INPUT) {

					input_layer[i].weight_adjusts[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * input_layer[i].input_value;
					input_layer[i].weight_adjusts[j] = update_weights(input_layer[i].weights[j], input_layer[i].weight_adjusts[j]);
				}

				if (next_node->type == HIDDEN) {

					next_internal[i].internal_wieghts_adjust[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * next_internal[i].sigmod;
					next_internal[i].internal_wieghts_adjust[j] = update_weights(next_internal[i].internal_layer_wieghts[j], next_internal[i].internal_wieghts_adjust[j]);
				}
			}
		}


		// quit algoritm, processing backward pass finished!!!
		if (next_node->type == INPUT)
			return;


		// another layer exists, so ready equations
		for (int j = 0; j < next_node->cells; j++) {

			// clear equation calculation
			next_internal[j].equation_holder = 0;

			for (int i = 0; i < internal_node->cells; i++) {

				// clear calculation
				double new_layer_calc = 0.00;


				// combine neuron inputs!!!
				for (int k = 0; k < internal_node->outputs; k++)
					new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;



				// graadiant of descent!!!
				new_layer_calc = new_layer_calc * (next_internal[j].sigmod * (1 - next_internal[j].sigmod));

				// buils next equations for the next layer!!!
				next_internal[j].weight_calc[i] = new_layer_calc;
				next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
			}
		}


		// ready next back progation layer to process!!!
		int_node = next_node;
		internal_net = int_node->data_set_ptr;


		next_node = next_node->prev;

		input_layer = next_node->data_set_ptr;
		next_internal = next_node->data_set_ptr;

	} while (1);
}

 


A few errors corrected in the hidden layers.

		// another layer exists, so ready equations
		for (int j = 0; j < next_node->cells; j++) {

			// clear equation calculation
			next_internal[j].equation_holder = 0;

			for (int i = 0; i < internal_node->cells; i++) {

				// clear calculation
				double new_layer_calc = 0.00;


				// combine neuron inputs!!!
				for (int k = 0; k < internal_node->outputs; k++)
					new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k];



				// graadiant of descent!!!
				new_layer_calc = (new_layer_calc * ((next_internal[j].sigmod * (1 - next_internal[j].sigmod))) * internal_net[i].h1zh1);

				// buils next equations for the next layer!!!
				next_internal[j].weight_calc[i] = new_layer_calc;
				next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
			}
		}

 


Had to make another adjustment to the algorithm. With large nets the calculations were failing, so I took the average of the input nodes. See below:

 

		/*
		 * An optimization algorithm controls exactly how the weights of
		 * the computational graph are adjusted during training
		 */
		// another layer exists, so ready equations
		for (int j = 0; j < next_node->cells; j++) {

			// clear equation calculation
			next_internal[j].equation_holder = 0;

			for (int i = 0; i < internal_node->cells; i++) {

				// clear calculation
				double new_layer_calc = 0.00;

				// combine neuron inputs!!!
				for (int k = 0; k < internal_node->outputs; k++)
					new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;

				// average input calculations, PS, we are travelling backwards
				new_layer_calc = new_layer_calc / internal_node->outputs;

				// gradient of descent!!!
				new_layer_calc = new_layer_calc * (next_internal[j].sigmod * (1 - next_internal[j].sigmod));

				// buils next equations for the next layer!!!
				next_internal[j].weight_calc[i] = new_layer_calc;
				next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
			}
		}

EDITED: I've no idea if my equations are correct, especially with multiple hidden layers, but hell, if it works it works. I did notice that even with errors in the calculations the neural net, being what it is, would still function correctly.
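
For comparison with the averaging above, the textbook back-propagated error term for a hidden cell with a sigmoid activation has no averaging factor; it is just the activation derivative times the weighted sum of the next layer's deltas. A sketch with illustrative names (mine, not the code above):

/* Standard hidden-layer delta: delta_j = h_j * (1 - h_j) * sum_k(delta_k * w_jk),
 * where k runs over the cells in the next layer forward. */
double hidden_delta(double h_j, const double delta_next[],
                    const double w_j_to_next[], int next_cells)
{
	double sum = 0.0;

	for (int k = 0; k < next_cells; k++)
		sum += delta_next[k] * w_j_to_next[k];

	return h_j * (1.0 - h_j) * sum;
}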


Guys, why would a neural network fail when the hidden layer count is greater than 5? I know we should never need more than two or maybe three hidden layers.


ANN, "Artificial Neural Network" has been the hardest project I have taken on.  I'm doing it on a PC to save time but it is a completely different problem domain than all the other projects I've worked on.

 

 


Put MNIST through the net for 30 minutes and it was about 83% correct. Trying another run through, this time for longer...


MNIST results are reporting an error rate of 8.4%. Not bad; the ones it fails on, even I have a hard time classifying, lol.


Tidied my application up and ran a test through the network; the error rate stands at about 4.02%. Not bad!

Next project is to design a GUI.  


It may be worth creating a GitHub (or GitLab) repository that you can refer to and use to show what you did with the project. If you add references and whatnot, it may also help others who are looking into this stuff. Who knows, someone with a clue might even stumble by and give pointers.


Hi, 

I have not gone through your code or tried to run it. I know that all this is very fun, and I like neural networks myself.

What I suggest is that you visit this website: https://www.kaggle.com/c/digit-r...
You can learn a lot from other people working on the same problem. Kaggle is a website where you can compete in machine learning and learn from others tackling similar problems.
 


bjotta wrote:

Hi, 

I have not gone through your code or tried to run it. I know that all this is very fun, and I like neural networks myself.

What I suggest is that you visit this website: https://www.kaggle.com/c/digit-r...
You can learn a lot from other people working on the same problem. Kaggle is a website where you can compete in machine learning and learn from others tackling similar problems.
 


 

Class website. Interesting topics, although achieving error rates of less than 1% is beyond me. And I mean that quite literally: even I (as a human) could not achieve that error rate, lol.