# MySQL help needed



## Cespian

Hey Guys

Hoping there are some SQL Gurus here. I need some assistance (MySQL specifically):

Context: I have 2 tables:

Table 1:

| Customer_id | Entries |
|-------------|---------|
| 1           | 3       |
| 2           | 4       |
| 3           | 2       |

Basically, I need to duplicate the Customer_id in Table 2 for each Entry they have. Hence Table 2 should look like the result set below after the insert:

Customer_id
1
1
1
2
2
2
2
3
3

How would I go about scripting this? Unfortunately I do not have the luxury of using another language (a loop in PHP would have been perfect) as I have to do this via the command line. Considering that there are 300K+ rows in Table1, manually typing INSERT INTO table2 VALUES for each entry is impossible. And yes, I am aware that Table2 will most likely exceed a billion rows after the insert is complete (I have over 40 GB available).

Another issue I may have is that I will be doing this on a production server and may lock up Table1, but I will create a backup of the table first and work from the backup.

Thanks in advance


----------



## yuganp

You can try a cross join with a temporary numbers table, where the number of rows in the numbers table is the maximum number of entries you have. Example below; I don't know how efficient this will be.


select customer_id
from (
    select *
    from (
        select 1 as customer_id, 3 as entries
        union
        select 2 as customer_id, 4 as entries
        union
        select 3 as customer_id, 2 as entries
    ) cust,
    (
        select 1 as x union
        select 2 as x union
        select 3 as x union
        select 4 as x
    ) numbers
) tmp
where x <= entries
order by customer_id
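For reference, here is the same idea written as the actual insert. This is just a sketch: it assumes the real tables are named table1(customer_id, entries) and table2(customer_id), and that a numbers(n) table covering 1 to MAX(entries) already exists.

```sql
-- Sketch: expand each customer_id `entries` times via the cross join.
-- Assumes table1(customer_id, entries), table2(customer_id) and a
-- pre-built numbers(n) table holding 1..MAX(entries).
INSERT INTO table2 (customer_id)
SELECT t.customer_id
FROM table1 t
JOIN numbers num ON num.n <= t.entries;
```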

Reactions: Like 1 | Thanks 1


----------



## Cespian

yuganp said:


> You can try to cross join with a tempory numbers table where the number of entries in the numbers table will be the maximum number of entries you have. Example below : I don't how efficient this will be
> 
> 
> select customer_id
> from (
> select *
> from
> (
> select 1 as customer_id, 3 as entries
> union
> select 2 as customer_id, 4 as entries
> union
> select 3 as customer_id, 2 as entries
> ) cust,
> (
> select 1 as x union
> select 2 as x union
> select 3 as x union
> select 4 as x
> ) numbers
> ) tmp where x <= entries
> order by customer_id



Efficiency is the least of my worries at this point; however, cross joins are quite intense (resource hoggers). Many thanks though, I'm going to test this tonight on a subset of 50 or so rows to see if it's viable. Thanks


----------



## yuganp

It all depends on the number of records in your numbers table. The last time I did this, it took about 2.5 times longer with a numbers table of 10 entries and about 5 times longer with a numbers table of 20 entries, compared with selecting all the records from the original table. This was done with Oracle.

Reactions: Thanks 1


----------



## Cespian

yuganp said:


> It all depends on the number of records in your numbers table. The last time i did this was it took about 2.5 times longer on a numbers table with 10 entries and about 5 times longer on a numbers table with 20 entries compared with selecting all the records from the original table. This was done with oracle.



There are over 300k rows that need to be multiplied on average 22 times into table2... so yeah lol, might fry some components.


----------



## KimVapeDashian

Cespian said:


> Hey Guys
> 
> Hoping there are some SQL Gurus here. I need some assistance (MySQL specifically):
> 
> Context; I have 2 tables:
> 
> Table 1
> --------
> Customer_id | Entries
> 1 | 3
> 2 | 4
> 3 | 2
> 
> Basically, I need to duplicate the Customer_id in Table 2 for each Entry they have. Hence Table 2 should look like result set below after the insert:
> 
> Customer_id
> 1
> 1
> 1
> 2
> 2
> 2
> 2
> 3
> 3
> 
> How would I go about scripting this? Unfortunately I do not have the luxury of using another language (Like creating a Loop in PHP would have been perfect) as I have to do this via command line. Considering that there are 300K+ rows in Table1, manually typing INSERT INTO table2 VALUES X entries is impossible. And yes, I am aware that Table2 will most likely exceed a billion rows after the insert is complete (I have over 40Gb available).
> 
> Another issue I may have is that I will be doing this on a production server and may lock up Table1, but I will create a backup of the table first and work from the backup.
> 
> Thanks in advance



Hi @Cespian,

At the moment, your Customer_ID is your PK for table 1, I assume?

For the new table, what will be your new PK?

I have an idea for you, but just need to see what the end plan for the structure is...

Reactions: Thanks 1


----------



## Cespian

KimVapeDashian said:


> Hi @Cespian,
> 
> At the moment, your Customer_ID is your PK for table 1 i assume?
> 
> For the new table, what will be your new PK?
> 
> I have an idea for you, but just need to see what the end plan for the structure is...



Hey Bud.

There will be no PK nor FKs on any of the tables. The customer_id in table1 is, however, indexed. It is not necessary for me to have an index or PK on Table2, due to the customer_id being duplicated plenty of times.

The reason I am doing this is to select a winner for a competition we are running at the moment. A customer gets an entry per item he/she purchases. So Table1 has unique customer_ids and a sum of items purchased during that time. Because Table2 is expected to be too large to export, the selection will happen internally (not by a 3rd party); the file will definitely be too large to move around easily (and encryption will be a pain in the rear as well).

EDIT: The create code for the tables was as follows:

DROP TABLE IF EXISTS tmp_cust_items;
CREATE TABLE tmp_cust_items
(
    customer_id INT UNSIGNED,
    items SMALLINT,
    INDEX idx_cust_id (customer_id)
);

DROP TABLE IF EXISTS tmp_winner_selection;
CREATE TABLE tmp_winner_selection
(
    customer_id INT UNSIGNED
);

Nothing special 

Thank you for looking into it


----------



## KimVapeDashian

Cespian said:


> Hey Bud.
> 
> There will be no PK nor FK's on any of the tables. The customer_id in table1 is however indexed. It is not necessary for me to have an index or PK on Table2 due to the Customer_id being duplicated plenty times.
> 
> The reason I am doing this is to select a winner for a competition we are running at the moment. A customer gets an entry per item he/she purchases. So Table1 has unique customer_id's and a sum of items purchased during that time. Because it is expected that Table2 will be too large to export, the selection will happen internally (not by a 3rd party) because the file will definitely be too large to move around easily (and encryption will be a pain in the rear as well).
> 
> Thank you for looking into it



Okay, I fully understand. I do have an awesome VBA macro for Excel that did this; however, looking at your DB size, I understand what you are looking for.

I'll be right back.

Reactions: Thanks 1


----------



## KimVapeDashian

@Cespian 

One last thing, are you using T-SQL or looking for a "purely" SQL solution?

Without T-SQL you have no loops, so your "nested if" styled insert would have to be repeated 10 times if the max number of entries per person is 10, whereas a loop can run to the nth position.

If it's strictly SQL, I am thinking the solution will require a 3rd "numbers only" table, as SQL natively has no means to generate sequential numbers.
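For what it's worth, such a numbers-only table can be generated without loops by cross-joining a small digits table with itself. A sketch (the table and column names here are made up):

```sql
-- Sketch: build numbers 1..1000 without a loop, using a ten-row
-- digits table cross-joined with itself. Names are hypothetical.
CREATE TABLE digits (d INT);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

CREATE TABLE numbers (n INT UNSIGNED PRIMARY KEY);
INSERT INTO numbers
SELECT a.d + b.d * 10 + c.d * 100 + 1
FROM digits a, digits b, digits c;
```

Adding a fourth copy of digits to the cross join extends it to 1..10,000, and so on.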


----------



## Cespian

KimVapeDashian said:


> @Cespian
> 
> One last thing, are you using T-SQL or looking for a "purely" SQL solution?
> 
> The lack of loops, without T-SQL means that your "nested if" styled insert, would be repeated 10 times if, the max number of entries per person is 10... Where, a loop can go to the nth position.
> 
> If strictly SQL, i am thinking that the solution will require a 3rd "numbers only" table as SQL natively has no means to generate sequential numbers.



If memory serves me correctly, T-SQL is native to Sybase and MSSQL. I am working on a MySQL Server. I know that most of the functions are basically the same. 
I need to go back to school lol, I have no clue how loops work in MySQL.

I am however able to pass the queries through MySQL Workbench or HeidiSQL or something and have the IDE warn me of any syntax errors or failed returns.


----------



## KimVapeDashian

Cespian said:


> If memory serves me correctly, T-SQL is native to Sybase and MSSQL. I am working on a MySQL Server. I know that most of the functions are basically the same.
> I need to go back to school lol, I have no clue how loops work in MySQL.
> 
> I am however able to pass the queries through MySQL Workbench or HeidiSQL or something and have the IDE warn me of any syntax errors or failed returns.



100% - I write in Microsoft SQL Server Management Studio, so I just need to ensure my solution is something you could work with... If it's okay, I will play around when I get home and see what I can come up with for you?

Reactions: Winner 1


----------



## Cespian

KimVapeDashian said:


> 100% - I write in Microsoft SQL Server Studio so just need to ensure my solution is something you could work with... If its okay, I will play around when I get home and see what I can come up with for you?



Thanks bud, I really appreciate it. I will attempt @yuganp 's solution as well tonight, but I doubt the cross join solution will complete this transaction in anywhere under a month lol... but worth a shot. 

I need to complete this by next Friday so I have plenty of time, no rush and no urgency, don't lose sleep over it. 

I worked on MSSQL exclusively as well until about 1.5 years ago, when I moved to a different company... it was terrible adjusting to MySQL (and the 2 Mongo instances we have here)... but ask me to do anything in MSSQLMS now and I will stare into an abyss lol.

Thanks again bud.


----------



## GadgetFreak

@Cespian 
Don't know if this will work but it may provide insight for a better solution:

select table1.customer, 
sum(table1.entries) 
from 
table1 
order by 1 
group by 2 
into temp temptable1;

Where temptable1 will have two columns:
customer
total_entries

Reactions: Thanks 1


----------



## Cespian

GadgetFreak said:


> @Cespian
> Dont know if this will work but it may provide insight for a better solotion:
> 
> select table1.customer,
> sum(table1.entries)
> from
> table1
> order by 1
> group by 2
> into temp temptable1;
> 
> Where temptable1 will have two columns:
> customer
> total_entries



Sorry bud, I don't quite understand what you are doing in the ORDER BY and GROUP BY clauses. Could you please elaborate?

Thanks


----------



## kevkev

Maybe check SO for some help. A quick search gave me:
http://stackoverflow.com/questions/1745165/looping-over-result-sets-in-mysql
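The approach in that link is a cursor inside a stored procedure. Adapted to this thread's problem it might look roughly like the sketch below (untested; the table and column names are assumed from the OP, not confirmed):

```sql
DELIMITER //
CREATE PROCEDURE expand_entries()
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE cid INT UNSIGNED;
    DECLARE cnt INT;
    DECLARE cur CURSOR FOR SELECT customer_id, entries FROM table1;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO cid, cnt;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- insert the customer_id once per entry
        WHILE cnt > 0 DO
            INSERT INTO table2 (customer_id) VALUES (cid);
            SET cnt = cnt - 1;
        END WHILE;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;
```

`CALL expand_entries();` would then do the whole run, though row-by-row inserts over 300K+ customers will be slow compared with a set-based cross join.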

Reactions: Thanks 1


----------



## Stosta

I'm no SQL guru, but there are a couple of things I consider when faced with a difficult problem:

1. Using a pencil and exam pad to write the billion entries.
2. Delegating the task to someone else.
3. Rather look for new vape gear on the forums.
4. Quitting and becoming a dance instructor.

Hope that helps.

Reactions: Funny 3


----------



## Cespian

kevkev said:


> Maybe check SO for some help. A quick search gave me:
> http://stackoverflow.com/questions/1745165/looping-over-result-sets-in-mysql



Ok wow, that's interesting. I can work with this and try to adapt it to run row by row and SET the value of the parameters before moving to the next row. 

I did check SO yesterday, and my search criteria must have been a bit wonky, but I couldn't find a related solution. I will see how I can adapt this. Thanks bud.

Reactions: Like 1


----------



## Cespian

Stosta said:


> I'm no SQL guru, but there are a couple of things I consider when faced with a difficult problem:
> 
> 1. Using a pencil and exam pad to write the billion entries.
> 2. Delegating the task to someone else.
> 3. Rather look for new vape gear on the forums.
> 4. Quitting and becoming a dance instructor.
> 
> Hope that helps.



If I implement point 4, I could never implement point 3 (mostly because I suck at dancing).
Thanks for the tips... If I don't get this right by my deadline, I will consider these.

Reactions: Funny 1


----------



## Cespian

Stosta said:


> I'm no SQL guru



BTW, NoSQL is a thing. Don't put that on your LinkedIn profile because you might be hunted for jobs you don't want.

Reactions: Funny 2


----------



## KimVapeDashian

Cespian said:


> BTW, NoSQL is a thing. Dont put that on your Linkedin profile because you might be hunted for jobs you dont want.



Yeah haha, but maybe @Stosta is a non relational data guru!

Reactions: Agree 1


----------



## Stosta

KimVapeDashian said:


> Yeah haha, but maybe @Stosta is a non relational data guru!


I can be if the price is right

Reactions: Funny 2


----------



## KimVapeDashian

Stosta said:


> I can be if the price is right



I sometimes remind myself of what a desperate human I am...

"Would I do 'this' for R10M?" - Yes
"Would I do 'this' for R5M?" - Yes
"Would I do 'this' for R1M?" - Yes



I have come to realize that even R5k questions my morals!

Reactions: Funny 1 | Can relate 1


----------



## Cespian

KimVapeDashian said:


> I sometimes remind myself of what a desparate human I am...
> 
> "Would I do 'this' for R10M?" - Yes
> "Would I do 'this' for R5M?" - Yes
> "Would I do 'this' for R1M?" - Yes
> 
> 
> 
> I have come to realize that even R5k questions my morals !



Very simple statement for that attitude:
CASE WHEN money_involved IS NOT NULL THEN 'do it!' ELSE 'fokkof' END AS thats_life_brah


----------



## GadgetFreak

Cespian said:


> Sorry bud, I dont quite understand what you are doing in the ORDER BY and GROUP BY clauses. Could you please elaborate.
> 
> Thanks


The group by will produce a sum of the number of entries, and the order by will just order it by customer, so you get:

CUSTOMER | TOTAL ENTRIES


Bummer, I am doing it the other way. IT WON'T WORK, SORRY!

Reactions: Thanks 1


----------



## Cespian

GadgetFreak said:


> Group by will produce a sum of the number of entries and the order by will just order it by customer:
> SO YOU GET:
> CUSTOMER | TOTAL ENTRIES
> 
> 
> Bummer I am doing it the other way IT WONT WORK SORRY!



Thanks anyway bud. Appreciate it.


----------



## yuganp

@Cespian 

The cross join query I gave does not take very long on Oracle.

On one of the data conversions I previously did, a select on the customer table took 18s. Running the cross join with a numbers table of 20 numbers took 85s; running it with 10 numbers took 44s.

Reactions: Thanks 1


----------



## Cespian

yuganp said:


> @Cespian
> 
> The cross join query i gave does not take very long on oracle.
> 
> On one on data conversions i previously did, doing select on on the customer table took 18s. When running the crossjoin with a numbers table of 20 numbers the time taken to retrieve 85s. running it with 10 numbers took 44s.



Thanks for that explanation bud. Considering my bare minimum of 300,000 "numbers" at a guesstimate of 1 second per row, that results in 83 hours... but I will definitely put this to the test tonight.


----------



## Christos

Cespian said:


> Hey Guys
> 
> Hoping there are some SQL Gurus here. I need some assistance (MySQL specifically):
> 
> Context; I have 2 tables:
> 
> Table 1
> --------
> Customer_id | Entries
> 1 | 3
> 2 | 4
> 3 | 2
> 
> Basically, I need to duplicate the Customer_id in Table 2 for each Entry they have. Hence Table 2 should look like result set below after the insert:
> 
> Customer_id
> 1
> 1
> 1
> 2
> 2
> 2
> 2
> 3
> 3
> 
> How would I go about scripting this? Unfortunately I do not have the luxury of using another language (Like creating a Loop in PHP would have been perfect) as I have to do this via command line. Considering that there are 300K+ rows in Table1, manually typing INSERT INTO table2 VALUES X entries is impossible. And yes, I am aware that Table2 will most likely exceed a billion rows after the insert is complete (I have over 40Gb available).
> 
> Another issue I may have is that I will be doing this on a production server and may lock up Table1, but I will create a backup of the table first and work from the backup.
> 
> Thanks in advance


Please provide table 2.
If table 2 has 3 lines for customerID 1 then it should be a simple left outer join.


----------



## Cespian

Christos said:


> Please provide table 2.
> If table 2 has 3 lines for customerID 1 then it should be a simple left outer join.



Hey Christos

There is no data in Table2 bud. My problem is that I am trying to populate Table2 with data effectively. An example of how Table2 should look after a viable INSERT mechanism is written is provided in the OP. 

As explained to KimVapeDashian: Table1 holds a unique customer id and a count of items bought per customer id. I need to insert each customer id into Table2 as many times as the quantity of items/entries stated in Table1 (which is on average about 22 times). The reason I cannot simply write individual INSERT statements is that Table1 holds 300k+ records. 

Hope that makes sense.
Thanks


----------



## Silver

My head hurts
Good luck there @Cespian !
Hope you don't fry any components

I thought I had Excel issues - until I read this thread...

Reactions: Like 1


----------



## Christos

Cespian said:


> Hey Chistos
> 
> There is no data in Table2 bud. My problem is that I am trying to populate Table2 with data effectively. Example of how Table2 should look after a viable INSERT mechanism is written, is provided in the OP.
> 
> As explained to KimVapeDashian: Table1 holds a unique customer id and a count of items bought per customer id. I need to insert the customer id as many times as the quantity of items/entries as stated in Table1 into Table2 (which is on average about 22 times). The reason I cannot simply write individual INSERT statements is because Table1 holds 300k+ records.
> 
> Hope that makes sense.
> Thanks


Can you do:

select entries from table1
group by entries

If your entries go from 1 to 20 (i.e. not too many of these) I have an idea that will save you buttloads of time, but it requires a "setup" of a table.


----------



## Christos

I basically want to know how many different entries there are, e.g. 1,2,3,4,5,6, so I'm eliminating the duplicates, because I'm sure 3 will be duplicated about 1000 times etc.


----------



## Christos

Silver said:


> My head hurts
> Good luck there @Cespian !
> Hope you don't fry any components
> 
> I thought I had Excel issues - until I read this thread...


Your headache is my bread and butter and vape gear 
Might explain why my head hurts most days!

Reactions: Agree 1


----------



## Cespian

Christos said:


> can you do:
> select Entries from Table 1
> group by entries
> 
> If your entries go from 1 to 20(I.E Not too many of these) I have an idea that will save you buttloads of time but requires a "setup" of a table.



Sure... that I'll have to do tomorrow. I only brought a 5k sample home. Thanks bud.


----------



## Cespian

Silver said:


> My head hurts
> Good luck there @Cespian !
> Hope you don't fry any components
> 
> I thought I had Excel issues - until I read this thread...



Haha @Silver, as @Christos mentioned, it's bread and butter for us. I'm pretty sure I would be lost as hell in your area of expertise. 

Just BTW, if you require Excel help, please hijack this thread

Reactions: Thanks 1


----------



## Christos

@Cespian, the logical approach here will get you a query that runs for hours.


Cespian said:


> Sure... that ill have to do tomorrow. I only brought a 5k sample home. Thanks bud.







It should take a few seconds to try it on your test data, provided the data is randomised and not just the first 5k records.


----------



## Cespian

Christos said:


> @Cespian, the logical approach here will get you a query that runs for hours.
> If
> 
> 
> Should take a few seconds to try it on your test data provided the data is randomised and not the first 5k records



It is the first 5000 records, ordered by entries ASC,  so basically the numbers range from 1 to 46. The only number missing in this sequence is 34.


----------



## Christos

Sounds promising. 
I'll give you assistance once junior is sleeping. Can't really do much without him jumping on me right now.

Reactions: Thanks 1


----------



## Cespian

Christos said:


> Sounds promising.
> I'll give you assistance once junior is sleeping. Can't really do much without him jumping on me right now.



No urgency bud. I'm leaving for the airport as soon as I get my juniors to bed as well, so I will be out of commission until at least 11ish. Got until Friday next week to come up with a solution


----------



## Christos

Cespian said:


> No urgency bud. Im leaving for the airport as soon as I get my juniors to bed as well. So will be out of commission until at least 11ish. Got until Friday next week to come up with a solution


Create a table with the entries, i.e. the entry amount.
So it will look like:

ENTRY AMOUNT
1
2
2
3
3
3
4
4
4
4

You can create a loop to do this for you.
It should run in under 5 mins to create the table. 

Then do a left join from that table to Table1 on entry amount = client entries.

This should give you what you are looking for; a select join on 300K records should take under a minute, if not 10 seconds. 

If you want help writing the query let me know.
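A sketch of that join step, assuming the helper table is called entry_amounts(amount) and holds each amount n repeated n times as shown above (these names are assumptions, not the real schema):

```sql
-- Sketch: every customer row matches its `entries` value once per
-- duplicate row in entry_amounts, producing the required repetition.
INSERT INTO table2 (customer_id)
SELECT c.customer_id
FROM table1 c
LEFT JOIN entry_amounts e ON e.amount = c.entries;
```

Because a customer with 3 entries matches all three rows holding amount 3, that customer_id comes out three times.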

Reactions: Thanks 1


----------



## KimVapeDashian

Christos said:


> Create a table with the entries I. E entry amount.
> So it will look like
> ENTRY AMOUNT
> 1
> 2
> 2
> 3
> 3
> 3
> 4
> 4
> 4
> 4
> 
> You can create a loop to do this for you.
> Should run in under 5 mins to create the table.
> 
> Then do a left join in the table with Table1 on entry amount = client entries
> 
> This should give you what you are looking for and 300K records in a select join should take under a minute if not 10 seconds.
> 
> If you want help writing the query let me know.



I was about to report back here on a similar method. It worked in my test sample space... although my "loop" was just an Excel macro to populate my number table... Well done @Christos

Reactions: Like 1 | Thanks 1


----------



## KimVapeDashian

BTW, here is my adapted Excel macro to replicate the numbers:


Sub KimCopy()
    Dim lRow As Long
    Dim RepeatFactor As Variant

    lRow = 1
    Do While (Cells(lRow, "A") <> "")
        RepeatFactor = Cells(lRow, "A")
        If ((RepeatFactor > 1) And IsNumeric(RepeatFactor)) Then
            Range(Cells(lRow, "A"), Cells(lRow, "A")).Copy
            Range(Cells(lRow + 1, "A"), Cells(lRow + RepeatFactor - 1, "A")).Select
            Selection.Insert Shift:=xlDown
            lRow = lRow + RepeatFactor - 1
        End If
        lRow = lRow + 1
    Loop
End Sub


Populating column A, with numbers 1 - 6 as a start



This is then the outcome of the macro data

Reactions: Like 1 | Thanks 1


----------



## Christos

KimVapeDashian said:


> Btw here is my adapted excel macro, to replicate the numbers
> 
> 
> Sub KimCopy()
> Dim lRow As Long
> Dim RepeatFactor As Variant
> 
> lRow = 1
> Do While (Cells(lRow, "A") <> "")
> 
> RepeatFactor = Cells(lRow, "A")
> If ((RepeatFactor > 1) And IsNumeric(RepeatFactor)) Then
> 
> Range(Cells(lRow, "A"), Cells(lRow, "A")).Copy
> Range(Cells(lRow + 1, "A"), Cells(lRow + RepeatFactor - 1, "A")).Select
> Selection.Insert Shift:=xlDown
> 
> lRow = lRow + RepeatFactor - 1
> End If
> 
> lRow = lRow + 1
> Loop
> End Sub
> 
> 
> Populating column A, with numbers 1 - 6 as a start
> View attachment 53919
> 
> 
> This is then the outcome of the macro data
> View attachment 53920



If only I liked or knew how to use Excel 
I usually import Excel into MSSQL and do everything from there.
I guess it's just years of SQL experience that make T-SQL more appealing and "simpler".

Reactions: Like 1


----------



## KimVapeDashian

@Christos 

I hear you 100%!!

In my line of work, I work far more in Excel and often pull data from my DB. Excel is bae for me!

Reactions: Can relate 1


----------



## Cespian

Christos said:


> Create a table with the entries I. E entry amount.
> So it will look like
> ENTRY AMOUNT
> 1
> 2
> 2
> 3
> 3
> 3
> 4
> 4
> 4
> 4
> 
> You can create a loop to do this for you.
> Should run in under 5 mins to create the table.
> 
> Then do a left join in the table with Table1 on entry amount = client entries
> 
> This should give you what you are looking for and 300K records in a select join should take under a minute if not 10 seconds.
> 
> If you want help writing the query let me know.



Hey guys, 

Sorry for the late response... been stuck in meetings
Sorry for my doffness, but how would I script the loop in SQL? I am very much familiar with how to do this in VBA and PHP... If I understand correctly:

1. Create "entry_amount" table
2. Insert each "entries" value into the "entry_amount" table multiplied by itself, e.g. if customer_id 1 has 3 entries and customer_id 16 has 3 entries, the number 3 needs to appear 6 times in the "entry_amount" table
3. LEFT OUTER JOIN table1.entries on entry_amount.entries

The entries in Table1 are not unique; only the customer_id is unique. If the entries were unique, this would work seamlessly. I'm going to paint a clearer picture below:

Snippet of Table1 below (data not real but accurate representation):




As you can see, both Customer 1 and Customer 4 have 4 entries, and Customer 3 and Customer 5 have 2 entries each. 

Even if I expand the entries to look like this:




And if I do a LEFT JOIN with Table1, SQL will bring back only the first found match, meaning that if there were 10 customers with 4 entries, the first customer would be selected for each unique entry. Whereas I need to see the following:




Please slap me if I am reading your suggestion incorrectly.

Thanks


----------



## Christos

Cespian said:


> Hey guys,
> 
> Sorry for the late response... been stuck in meetings
> sorry for my doffness, but how would I script the loop in SQL? I am very much familiar on how to do this in VBA and PHP... If I understand correctly:
> 
> 1. Create "entry_amount" table
> 2. Insert each "entries" value into "entry_amount" table multiplied by itself. eg. If customer_id 1 has 3 entries, and customer_id 16 has 3 entries, the number 3 needs to appear 6 times in the "entry_amount" table
> 3. LEFT OUTER JOIN table1.entries on entry_amount.entries
> 
> The entries in Table1 are not unique, only the customer_id is unique. If the entries were unique, this would work seemlessly. Going to paint a clearer picture below:
> 
> Snippet of Table1 below (data not real but accurate representation):
> 
> View attachment 53971
> 
> 
> As you can see, both Customer 1 and Customer 4 has 4 entries, and Customer 3 and Customer 5 has 2 entries each.
> 
> Even if I expand the entries to look like this:
> 
> View attachment 53972
> 
> 
> And do a LEFT JOIN with TABLE 1, SQL will bring only the first found match, meaning that; if there were 10 customers with 4 entries, the first Customer will be selected for each unique entry. Whereas I need to see the following:
> 
> View attachment 53973
> 
> 
> Please slap me if I am reading your suggestion incorrectly.
> 
> Thanks


Simple select from customer first and then left outer join to entries. 
All of the customers will join on whatever matches.

I'll give you a loop example later. Running out for a bit.

Reactions: Thanks 1


----------



## Cespian

KimVapeDashian said:


> @Christos
> 
> I hear you 100%!!
> 
> In my line of work, I work far more in Excel and often pull data from my DB. Excel is bae for me!



Over the years, I have noticed that Excel is very misunderstood. The first mistake is that people use it as a DB (I have honestly discovered one with 15 sheets, holding roughly 600K rows and at least 5 columns each)... It is honestly one of the most versatile tools I've ever discovered, if used correctly. I do design work, presentations, delimiter changes in txt files, and report scheduling (to name a few) from Excel. I love it. Every single one of the objects below was created in Excel:




So yeah, Excel is Bae for me too!

Reactions: Agree 1 | Winner 1


----------



## KimVapeDashian

Cespian said:


> Over the years, I noticed that Excel is very misunderstood. First mistake is that people use it as a DB (I have honestly discovered one with 15 sheets, holding roughly 600K rows and at least 5 columns each)... It is honestly one of the most versatile tools I've ever discovered if used correctly. I do design work, presentations, change delimiters in txt files, schedule reports (to name a few) from Excel. I love it. Every single one of the objects below is created in Excel:
> 
> View attachment 53977
> 
> 
> So yeah, Excel is Bae for me too!



I love what you have done there! Really nice use...

And, yes - Excel is a far more powerful tool than people realise (I still learn new things every day)... I've also seen these "database" spreadsheets and their slow executions. External connections and clean queries really make a world of difference...

We need to have a coffee and vape!

Reactions: Like 1


----------



## KimVapeDashian

Next time I'm in CT, or you come up to JHB 

Just saw location.


----------



## Cespian

KimVapeDashian said:


> Next time im in CT or you come up to JHB
> 
> Just saw location.



Haha was actually responding about location now.

Thanks bud, will let you know (rare probability though) if I come up that way again.

Reactions: Thanks 1


----------



## KimVapeDashian

Back on topic,

I'm headed out now, and will give you my loop script for SSMS 2008 **when I get home**. @Christos might reply sooner~


*Edit for being rushed

Reactions: Thanks 1


----------



## Christos

-- MySQL only allows WHILE loops inside a stored routine, so wrap it in a procedure;
-- this inserts each value i exactly i times (1; 2,2; 3,3,3; ...):
DELIMITER //
CREATE PROCEDURE fill_entry_amounts()
BEGIN
    SET @i = 1;
    SET @Max = (SELECT MAX(entries) FROM table1);
    WHILE @i <= @Max DO
        SET @x = 0;
        WHILE @x < @i DO
            INSERT INTO randomTable (entry) VALUES (@i);
            SET @x = @x + 1;
        END WHILE;
        SET @i = @i + 1;
    END WHILE;
END //
DELIMITER ;

Reactions: Thanks 1


----------



## Christos

Then it should be:

select *
from client c
left outer join randomTable r on c.entry = r.entry

Reactions: Thanks 1


----------



## Cespian

Christos said:


> set@i=0;
> set@Max = select max(Entry) from Table;
> while@i<@Max do
> set@x=0;
> while@x<@i do
> insert into randomTable (Entity)
> select @x;
> set@x=@x+1;
> endwhile;
> set@i=@i+1;
> endwhile;



Thanks a mil @Christos, will adapt and test hopefully this morning. First I need to migrate the MS Access DBs (that the Ops people thought were a good idea) over to a proper DB 

Really appreciate all the help!

Reactions: Thanks 1


----------



## Christos

Cespian said:


> Thanks a mil @Christos will adapt and test hopefully during this morning. Need to first migrate MS Access DB's that the Ops people thought was a good idea, over to a Proper DB
> 
> Really appreciate all the help!


Don't forget to DROP TABLE IF EXISTS first, and set max to 10 first to see if it works.

Reactions: Thanks 1


----------



## Cespian

Christos said:


> Don't forget to drop create if exists first and set max to 10 first to see if it works.



Thanks, will be using a 1K snippet of the table to test - so it won't be too resource intensive (and I will make sure to ORDER BY RAND() instead of entries ASC this time ).

Thanks bud

Reactions: Like 1


----------



## Christos

How did the query go @Cespian ?


----------



## Cespian

Christos said:


> How did the query go @Cespian ?



Ah sorry, forgot to give feedback... created a CRON job and it's running daily. I actually got some assistance from the IT dudes and created the insert loop in PHP (much cleaner). I also edited the original table slightly by using a unique basket/transaction_id instead of unique customer_ids, with a "processed" column, so that each day the cron job creates thousands of insert statements where "processed" IS NULL. The initial loop took just over 1h40 and Table2 grew to just over 2Bn rows. I ran a COUNT(*) yesterday afternoon and it was already on 3.6Bn. It's a mission remoting into this machine so I will check again on Friday. This campaign is running until the end of June so yeah... expecting a good couple of GBs worth of data in there. All I said was good luck to the auditors lol. Thanks again for your help bro.
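For anyone following along, the daily batch described above is roughly equivalent to something like this in plain SQL (the real job is a PHP loop, and these table and column names are guesses from the description, not the actual schema):

```sql
-- Sketch: expand only unprocessed rows, then flag them as done.
-- In practice both statements should run inside one transaction.
INSERT INTO tmp_winner_selection (customer_id)
SELECT t.customer_id
FROM tmp_cust_items t
JOIN numbers n ON n.n <= t.items
WHERE t.processed IS NULL;

UPDATE tmp_cust_items
SET processed = 1
WHERE processed IS NULL;
```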

Reactions: Winner 1 | Thanks 1


----------



## Christos

Cespian said:


> Ah sorry, forgot to give feedback... created a CRON and its running daily. Actually got some assistance from the IT dudes, and created the insert loop in PHP (much cleaner). I also edited the original table slighly by using a unique basket/transaction_id instead of unique customer_id's, with a "processed" column, so that each day, the cron job creates thousands of insert statements where "processed" IS NULL. The initial loop took just over 1H40 and Table2 grew to just over 2Bn rows. Ran a COUNT(*) yesterday afternoon and it was already on 3.6Bn. Its a mission remoting into this machine so I will check again on Friday. This campaign is running until end of June so yeah... expecting a good couple of Gb's worth of data in there. All I said was Good Luck to the auditors lol. Thanks again for your help bro.


That's a buttload of data. I limit myself to about 50 million records; beyond that I need serious hardware with SSD drives to make magic happen!

Reactions: Like 1


----------



## Cespian

Christos said:


> Thats a buttload of data. I limit my skills to about 50 million records then I need serious hardware with SSD drives to make magic happen!



That's what dreams are made of, man... a server loaded with SSDs. I have that issue when switching from my local instance (on my notebook, with SSDs) to the replication machines

Reactions: Agree 1


----------

