Ran a small XOR gate through the system and it came up with these results:

A  B  Out
0  0  0.002464
0  1  0.988089
1  0  0.989219
1  1  0.028705

I'll take that as a success....

Wm.
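
A quick way to sanity-check results like these is to round each output against a 0.5 threshold and compute the mean squared error against the XOR targets. A minimal sketch in C, using only the numbers quoted above:

#include <stdio.h>

int main(void)
{
    // XOR targets and the four network outputs quoted above
    double target[4] = { 0.0, 1.0, 1.0, 0.0 };
    double output[4] = { 0.002464, 0.988089, 0.989219, 0.028705 };
    double mse = 0.0;

    for (int i = 0; i < 4; i++) {
        double err = output[i] - target[i];
        mse += err * err;
        // a 0.5 threshold recovers the boolean truth table
        printf("case %d: rounded output = %d\n", i, output[i] > 0.5);
    }
    printf("mean squared error = %f\n", mse / 4.0);
    return 0;
}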

Oh, that sounds like the beginning of a decision cutting plane.

Hot dog or not hot dog, now you can decide.

Found a wee bug; now the results are in line with:

A  B  Out
0  0  0.001
1  0  0.998
0  1  0.998
1  1  0.001

Hot dog!

what are you banging on about? lol

Since you are making a separator, it seemed like this is where you were headed: https://youtu.be/ACmydtFDTGs

But I think you've already been there, hot dog... may the lucky bun be with you.

The hot dog app on HBO's Silicon Valley, good fun. Take care not to make a Bender.

Guys, I'm testing a simple XOR gate on the neural net: 2 input cells, 1 output cell, and 5 or more hidden cells. And it works.

But when I tested it with 2 input cells, 4 hidden cells and one output cell it produced incorrect results. I would have assumed 4 hidden cells would have been ample.

Anyone got an opinion on these results?
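
For what it's worth, XOR is not linearly separable, so at least two hidden cells are needed in theory, and with hand-picked weights a 2-2-1 sigmoid net already computes it. A standalone sketch (these weights are illustrative, not anything trained by the system in this thread):

#include <stdio.h>
#include <math.h>

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            // hidden cell 1 approximates OR, hidden cell 2 approximates NAND
            double h1 = sigmoid(20.0 * a + 20.0 * b - 10.0);
            double h2 = sigmoid(-20.0 * a - 20.0 * b + 30.0);
            // the output cell ANDs them together, giving XOR
            double out = sigmoid(20.0 * h1 + 20.0 * h2 - 30.0);
            printf("%d XOR %d = %f\n", a, b, out);
        }
    }
    return 0;
}

Gradient descent can get stuck in local minima with so few cells, which may be why training only succeeds reliably with a few more.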

Feck your hot dogs, I got it working. It will learn an XOR gate with 3 or more hidden cells.

:-), now show the code or it did not happen.

I'll show my code tomorrow, have a funeral to attend....

Guys, that was some funeral, Jesus! 21 hours of beating drink into you.

Only in Ireland.

Designed a generic algorithm for back-propagating neural nets. It works with 1 and 3 hidden layers when I don't update the bias weights. When I include code to update the bias weights, it fails completely.

So the error must be that it is not following the path of descent properly. My guess!

Can anyone suggest some things I should try? The update factor is 0.01.

Wm.
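
One thing worth checking: with a sigmoid output cell and squared error, the bias gradient is the same delta used for the other weights, multiplied by the bias input, which is the constant 1, not by the bias weight itself. A minimal sketch of the output-layer update under those assumptions (plain arrays and hypothetical names, not the structs used in this thread):

#define RATE 0.01 /* the update factor mentioned above */

/* delta for one sigmoid output cell with squared error:
 * dE/dz = (out - target) * out * (1 - out) */
static double delta_out(double out, double target)
{
    return (out - target) * out * (1.0 - out);
}

/* update the n incoming weights and the bias of one output cell */
static void update_output_cell(double out, double target,
                               const double in[], double w[], int n,
                               double *bias)
{
    double d = delta_out(out, target);
    for (int i = 0; i < n; i++)
        w[i] -= RATE * d * in[i]; /* weight gradient uses the upstream activation */
    *bias -= RATE * d;            /* bias input is 1, so just the delta */
}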

For anyone interested, the basic layout of the back propagation algorithm is listed below. It needs tidying up.

void back_propagate_outputs(struct ann_builder * master_net){
    struct neural_layer * output_node;
    struct neural_layer * internal_node;
    struct internal_layer * internal_net;
    struct output_layer * output_net1;
    double weight_adjust = 0.00;

    // get start neural layers
    output_node = master_net->start_ptr;
    // search for the output neurons
    while(output_node->type != OUTPUT)
        output_node = output_node->next;
    internal_node = output_node->prev; // find last internal layer
    internal_net = output_node->prev->data_set_ptr;
    output_net1 = output_node->data_set_ptr;

    // weight updates
    for(int i = 0; i < output_node->cells; i++){
        for(int j = 0; j < internal_node->cells; j++){
            // find the dirivatives
            weight_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, internal_net[j].sigmod);
            // update weights
            internal_net[j].internal_wieghts_adjust[i] = update_weights(internal_net[j].internal_layer_wieghts[i], weight_adjust);
        }
    }

    // Bias Update on all nodes
    for (int i = 0; i < output_node->cells; i++)
        output_net1[i].bias_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, output_net1[i].bias);

    struct input_layer * input_layer = output_node->prev->prev->data_set_ptr;
    struct internal_layer * next_internal = output_node->prev->prev->data_set_ptr;
    struct internal_layer * internal_layer = output_node->prev->data_set_ptr;
    struct neural_layer * int_node = output_node->prev;
    struct neural_layer * next_node = output_node->prev->prev;
    double dedo;
    double dodzo;

    for(int j = 0; j < int_node->cells; j++){
        internal_net[j].equation_holder = 0;
        for(int i = 0; i < output_node->cells; i++){
            dedo = output_net1[i].sigmod - output_net1[i].target;
            dodzo = output_net1[i].sigmod * (1 - output_net1[i].sigmod);
            internal_net[j].weight_calc[i] = dedo * dodzo;
            internal_net[j].equation_holder += internal_net[j].weight_calc[i] * internal_net[j].internal_layer_wieghts[i];
        }
    }

    do {
        /**
         * Do next backward propagation!!!
         */
        for (int i = 0; i < int_node->cells; i++)
            internal_net[i].h1zh1 = internal_layer[i].sigmod * (1 - internal_layer[i].sigmod);

        for (int j = 0; j < int_node->cells; j++) {
            for (int i = 0; i < next_node->cells; i++) {
                if (next_node->type == INPUT) {
                    input_layer[i].weight_adjusts[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * input_layer[i].input_value;
                    input_layer[i].weight_adjusts[j] = update_weights(input_layer[i].weights[j], input_layer[i].weight_adjusts[j]);
                }
                if (next_node->type == HIDDEN) {
                    next_internal[i].internal_wieghts_adjust[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * next_internal[i].sigmod;
                    next_internal[i].internal_wieghts_adjust[j] = update_weights(next_internal[i].internal_layer_wieghts[j], next_internal[i].internal_wieghts_adjust[j]);
                }
            }
        }

        // quit algoritm, processing backward pass finished!!!
        if (next_node->type == INPUT)
            return;

        static int look = 0;
        for (int j = 0; j < next_node->cells; j++) {
            next_internal[j].equation_holder = 0;
            for (int i = 0; i < internal_node->cells; i++) {
                double value = 0.00;
                if (look >= internal_node->outputs)
                    look = 0;
                for (int k = 0; k < internal_node->outputs; k++)
                    value += internal_net[i].weight_calc[k];
                next_internal[j].weight_calc[i] = value * internal_net[i].h1zh1 * internal_net[i].internal_layer_wieghts[look++];
                next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
            }
        }

        // ready next back progation layer to process!!!
        int_node = next_node;
        internal_net = int_node->data_set_ptr;
        next_node = next_node->prev;
        input_layer = next_node->data_set_ptr;
        next_internal = next_node->data_set_ptr;
    } while (1);
}

Oh, there is an error in this part.

static int look = 0;
for (int j = 0; j < next_node->cells; j++) {
    next_internal[j].equation_holder = 0;
    for (int i = 0; i < internal_node->cells; i++) {
        double value = 0.00;
        if (look > internal_node->outputs - 1)
            look = 0;
        for (int k = 0; k < internal_node->outputs; k++) {
            value += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;
        }
        next_internal[j].weight_calc[i] = value;
        next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
    }
}

Cracked it, the algorithm can now handle any neural network architecture.

    for (int i = 0; i < internal_node->cells; i++) {
        double value = 0.00;
        if (look >= internal_node->outputs)
            look = 0;
        for (int k = 0; k < internal_node->outputs; k++)
            value += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k];
        value = value * (1 - internal_net[i].sigmod);
        next_internal[j].weight_calc[i] = value * internal_net[i].h1zh1;
        next_internal[j].equation_holder += next_internal[j].weight_calc[i] * next_internal[j].internal_layer_wieghts[i];
    }
}

I hate to point it out, but you're spelling 'weight' in two different ways... that's going to confuse the hell out of you at some future time!

Neil

Thanks for pointing that out.

Found a coding error; now it does not work. Neural nets are funny: even with an error in the processing you can still get the correct results.

Strange.

EDITED: Was not an error.

Played about with the generic algorithm and eventually got it working. Have a look. PS: the spelling mistakes have not been corrected, please ignore them.

void back_propagate_outputs(struct ann_builder * master_net){
    struct neural_layer * output_node;
    struct neural_layer * internal_node;
    struct internal_layer * internal_net;
    struct output_layer * output_net1;
    double weight_adjust = 0.00;

    // get start neural layers
    output_node = master_net->start_ptr;
    // search for the output neurons
    while(output_node->type != OUTPUT)
        output_node = output_node->next;
    internal_node = output_node->prev; // find last internal layer
    internal_net = output_node->prev->data_set_ptr;
    output_net1 = output_node->data_set_ptr;

    // weight updates
    for(int i = 0; i < output_node->cells; i++){
        for(int j = 0; j < internal_node->cells; j++){
            // find the dirivatives
            weight_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, internal_net[j].sigmod);
            // update weights
            internal_net[j].internal_wieghts_adjust[i] = update_weights(internal_net[j].internal_layer_wieghts[i], weight_adjust);
        }
    }

    // Bias Update on all nodes
    for (int i = 0; i < output_node->cells; i++)
        output_net1[i].bias_adjust = layer_out_update(output_net1[i].sigmod, output_net1[i].target, output_net1[i].bias);

    struct input_layer * input_layer = output_node->prev->prev->data_set_ptr;
    struct internal_layer * next_internal = output_node->prev->prev->data_set_ptr;
    struct internal_layer * internal_layer = output_node->prev->data_set_ptr;
    struct neural_layer * int_node = output_node->prev;
    struct neural_layer * next_node = output_node->prev->prev;
    double dedo;
    double dodzo;

    for(int j = 0; j < int_node->cells; j++){
        internal_net[j].equation_holder = 0;
        for(int i = 0; i < output_node->cells; i++){
            dedo = output_net1[i].sigmod - output_net1[i].target;
            dodzo = output_net1[i].sigmod * (1 - output_net1[i].sigmod);
            internal_net[j].weight_calc[i] = dedo * dodzo;
            internal_net[j].equation_holder += internal_net[j].weight_calc[i] * internal_net[j].internal_layer_wieghts[i];
        }
    }

    do {
        /**
         * Do next backward propagation!!!
         */
        for (int i = 0; i < int_node->cells; i++)
            internal_net[i].h1zh1 = internal_layer[i].sigmod * (1 - internal_layer[i].sigmod);

        for (int j = 0; j < int_node->cells; j++) {
            for (int i = 0; i < next_node->cells; i++) {
                if (next_node->type == INPUT) {
                    input_layer[i].weight_adjusts[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * input_layer[i].input_value;
                    input_layer[i].weight_adjusts[j] = update_weights(input_layer[i].weights[j], input_layer[i].weight_adjusts[j]);
                }
                if (next_node->type == HIDDEN) {
                    next_internal[i].internal_wieghts_adjust[j] = internal_net[j].equation_holder * internal_net[j].h1zh1 * next_internal[i].sigmod;
                    next_internal[i].internal_wieghts_adjust[j] = update_weights(next_internal[i].internal_layer_wieghts[j], next_internal[i].internal_wieghts_adjust[j]);
                }
            }
        }

        // quit algoritm, processing backward pass finished!!!
        if (next_node->type == INPUT)
            return;

        // another layer exists, so ready equations
        for (int j = 0; j < next_node->cells; j++) {
            // clear equation calculation
            next_internal[j].equation_holder = 0;
            for (int i = 0; i < internal_node->cells; i++) {
                // clear calculation
                double new_layer_calc = 0.00;
                // combine neuron inputs!!!
                for (int k = 0; k < internal_node->outputs; k++)
                    new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;
                // graadiant of descent!!!
                new_layer_calc = new_layer_calc * (next_internal[j].sigmod * (1 - next_internal[j].sigmod));
                // buils next equations for the next layer!!!
                next_internal[j].weight_calc[i] = new_layer_calc;
                next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
            }
        }

        // ready next back progation layer to process!!!
        int_node = next_node;
        internal_net = int_node->data_set_ptr;
        next_node = next_node->prev;
        input_layer = next_node->data_set_ptr;
        next_internal = next_node->data_set_ptr;
    } while (1);
}

A few errors corrected in the hidden layers.

// another layer exists, so ready equations
for (int j = 0; j < next_node->cells; j++) {
    // clear equation calculation
    next_internal[j].equation_holder = 0;
    for (int i = 0; i < internal_node->cells; i++) {
        // clear calculation
        double new_layer_calc = 0.00;
        // combine neuron inputs!!!
        for (int k = 0; k < internal_node->outputs; k++)
            new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k];
        // graadiant of descent!!!
        new_layer_calc = (new_layer_calc * ((next_internal[j].sigmod * (1 - next_internal[j].sigmod))) * internal_net[i].h1zh1);
        // buils next equations for the next layer!!!
        next_internal[j].weight_calc[i] = new_layer_calc;
        next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
    }
}
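
For comparison, the textbook recursion this section is converging on: a hidden cell's delta is the sum of the next layer's deltas weighted by the connecting weights, times the cell's own sigmoid derivative. A minimal sketch with flat arrays and hypothetical names (not the structs used in this thread):

/* deltas for one hidden layer, given the deltas of the layer after it.
 * act[j]        : sigmoid output of hidden cell j   (n_this cells)
 * w[j*n_next+k] : weight from hidden cell j to next-layer cell k
 * d_next[k]     : delta of next-layer cell k        (n_next cells)
 * d_this[j]     : resulting delta for hidden cell j */
static void hidden_deltas(int n_this, int n_next,
                          const double act[], const double w[],
                          const double d_next[], double d_this[])
{
    for (int j = 0; j < n_this; j++) {
        double sum = 0.0;
        for (int k = 0; k < n_next; k++)
            sum += d_next[k] * w[j * n_next + k];
        /* scale by the sigmoid derivative of this cell */
        d_this[j] = sum * act[j] * (1.0 - act[j]);
    }
}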

Had to make another adjustment to the algorithm. With large nets the calculations were failing, so I took the average of the input nodes. See below:

/*
 * An optimization algorithm controls exactly how the weights of
 * the computational graph are adjusted during training
 */
// another layer exists, so ready equations
for (int j = 0; j < next_node->cells; j++) {
    // clear equation calculation
    next_internal[j].equation_holder = 0;
    for (int i = 0; i < internal_node->cells; i++) {
        // clear calculation
        double new_layer_calc = 0.00;
        // combine neuron inputs!!!
        for (int k = 0; k < internal_node->outputs; k++)
            new_layer_calc += internal_net[i].weight_calc[k] * internal_net[i].internal_layer_wieghts[k] * internal_net[i].h1zh1;
        // average input calculations, PS, we are travelling backwards
        new_layer_calc = new_layer_calc / internal_node->outputs;
        // gradient of descent!!!
        new_layer_calc = new_layer_calc * (next_internal[j].sigmod * (1 - next_internal[j].sigmod));
        // buils next equations for the next layer!!!
        next_internal[j].weight_calc[i] = new_layer_calc;
        next_internal[j].equation_holder += new_layer_calc * next_internal[j].internal_layer_wieghts[i];
    }
}
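
A side note on that averaging step: dividing the summed term by the fan-out does not change the direction of the gradient, only its size, so it acts like a per-layer scaling of the update factor. In the usual notation (a hedged reading, since the equations in this thread are the author's own):

delta_j = (1/K) * ( sum over k of delta_k * w_jk ) * s_j * (1 - s_j)

where K is the number of outgoing connections (internal_node->outputs in the code). The standard derivation has no 1/K factor, but a smaller effective step can keep large nets from blowing up.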

EDITED: I've no idea if my equations are correct, especially with multiple hidden layers, but hell, if it works it works. I did notice that, even with errors in the calculations, the neural net would still function correctly.

This reply has been marked as the solution.

Tidied my application up and ran a test through the network; the error rate stands at about 4.02%. Not bad!

Next project is to design a GUI.

Hi,

I have not gone through your code or tried to run it. I know that all this is great fun, and I like neural networks myself.

What I suggest is that you visit this website: https://www.kaggle.com/c/digit-r...

You can learn a lot from other people working on the same problem. Kaggle is a website where you can compete in machine learning and learn from others tackling similar problems.

Class website. Interesting topics, although achieving error rates of less than 1% is beyond me. And I mean that quite literally: even I (as a human) could not achieve that error rate. lol