Not very often in recent times have we had credible cause to heap praise on the West Indies Cricket Board (WICB). The success of the Under-19 team presents us with one such precious moment. Conversely, the splintered criticisms of the board, pointing to inadequate preparation of the triumphant team, seem spurious, irrational and lacking in credibility. Since the objective of preparing any sporting team for competition is for that team to be victorious, once the team has in fact been victorious, there can be no guarantee that a differently prepared team would have done any better. It is by that general principle that these particular criticisms of the WICB should be rubbished. If the WICB and the coaching staff had it to do all over again with the same set of players, it would be foolhardy for them to do anything significantly different.

The WICB president, Mr Dave Cameron, speaking on the arrival of the three Jamaican players in the squad, quite rightly took credit for the part the board played in the selection and preparation of the team. Mr Cameron pointed to the fact that at least five members of the team are already playing professionally, and that the core of the team was selected as far back as 2014 and actually competed in the regional 50-over competition in that very year.

REGULAR TRAINING CAMPS

Subsequent to that, there were regular training camps leading into the tournament, with the preparation culminating in a three-match warm-up series against the host nation of the tournament, Bangladesh. The genesis of these criticisms, I suspect, was the relatively sparse number of warm-up games the team played leading into the World Cup compared with top teams such as India, who played consistently together for two years and were unbeaten coming into the tournament. Bangladesh, we were told, played close to a dozen warm-up games and were red hot early in the tournament, as were the Indians.
The West Indies emphatically destroyed the myth of perfection surrounding the preparation of both India and Bangladesh by beating both when it mattered most. It is, therefore, quite plausible that the West Indies' preparations were better than those of both Bangladesh and India. The West Indies team was the sharpest team mentally in the tournament, as evidenced by those two huge tournament-changing moments: the crucial run out against Zimbabwe, followed by the big stumping of the Indian star batsman in the final.

SHARP, TALENTED

Not only were they sharp mentally, they were talented, they were motivated, and they appeared to get fitter and sharper as the tournament progressed, while the more fancied teams, with their so-called superior preparation, faded and fizzled at the business end of the tournament. The silly assumption being made is that because India and Bangladesh played 20 or 30 warm-up games between them, they were better prepared. That is obviously not necessarily so. There is always the risk of overworking and burning out the players, plus there are cultural differences that must be considered. West Indians are naturally stronger athletes and perhaps need less physical drilling and more psychological work. The success of this West Indies team might very well serve to redefine the way teams at this level are prepared for competition, with fewer physical and game sessions and more mental and psychological preparation. The victorious players, the coaching staff, as well as the WICB leadership should all be congratulated for executing plans and preparations that in the end were proven perfect by the fact that the West Indies Under-19 team lifted the ultimate prize.

Ticketmaster's Verified Fan Successfully Blocks Ticket Bots

The program to block ticket bots and brokers from buying tickets shows a 90 percent success rate

Renée Fabian, GRAMMYs | Sep 8, 2017 – 6:12 pm

The battle against ticket bots has plagued the ticket sales industry, and especially the live music business. But Live Nation's Ticketmaster seems to be making strides with its Verified Fan program. Verified Fan separates actual music fans from bots and scalpers by allowing fans to pre-register to order tickets. Once the system verifies that an actual human, rather than a bot, is looking to purchase tickets, the fan receives a pre-registration code that "unlocks the opportunity to purchase tickets."

According to Live Nation CEO Michael Rapino, the program has had a 90 percent success rate so far, which is crucial for fans who want to land a ticket to see their favorite artists. Demand is always higher than the number of tickets available. For example, more than 10 million Adele fans attempted to buy tickets for her 2016 tour, but only 400,000 seats were available.

As demand for live shows continues to increase, blocking bots is good for fans and business alike.

"There's probably not been an industry with compounded growth as consistent as the live music business," Rapino said. "Live concerts continually ranked in the top three things that a consumer wants to attend every year as their escape."

Ill-gotten money. Illustration: Prothom Alo

Experts have questioned the move by the Anti-Corruption Commission (ACC) not to file cases against anybody who returns ill-gotten wealth. Although there is no law or provision in the country to this end, the ACC has made this move, aiming to retrieve Tk 140 million. Experts said that such measures will encourage corruption.

According to the ACC, it made a breakthrough under this process by retrieving Tk 130 million from the Hallmark group. Besides, victims have got back Tk 9.83 million in 10 different cases through the mediation of the ACC. In the context of the overall scale of corruption, though, this amount is negligible.

ACC chairman Iqbal Mahmood admits there are legal questions in this regard. Speaking to Prothom Alo, Iqbal Mahmood said, "We will take measures based on the circumstances pertaining to each case. We have to analyse the costs and benefits. We have to take reality into consideration, as well as consider legality and ethics."

Executive director of Transparency International Bangladesh Iftekharuzzaman said, "It is not acceptable either from an ethical or legal point of view. During 1/11 we opposed the idea of the Truth Commission. Such a measure can't act as a prevention of corruption after the institution fails to establish justice."

Former ACC chairman Ghulam Rahman said that a 'plea bargain' law could be formulated, though not as a permanent measure. Even then, questions of legality would remain.

Nakshi Knit of Hallmark returned money in one of its three cases. Later, it decided not to return money in the others; two cases against Nakshi Knit are still underway. On 9 November, assistant manager of Sonali Bank Subrato Kumar Das told an ACC director that the bank had received Tk 327.50 million as a result of such action against Nakshi Knit.

ACC lawyer Khurshid Alam Khan said that although the process has been questioned, the ACC took a decision on the Tk 130 million with Hallmark. Although it is not strictly legal, the ACC and the court gave their consent on the matter.
If the legality is challenged at the High Court, the decision will be cancelled.

Despite a verdict by the Appellate Division in March last year, the government did not return Tk 12.32 billion collected by the intelligence agencies during the army-backed caretaker government in 2007. Besides, Tk 340 million collected by the Truth Commission is lying in the government exchequer.

On condition of anonymity, an official at Bangladesh Bank (BB) said the central bank sent a letter to the government to implement a verdict on returning Tk 6.15 billion, but received no reply.

Speaking to Prothom Alo, BB lawyer Ameer-ul-Islam said a review petition has been filed against the verdict on returning the Tk 6.15 billion collected from 40 businessmen. Preparations are on for the hearing.

According to the ACC, the anti-graft body sees some measure of success in recovering money embezzled by illegal means. The lion's share of the Tk 40 billion of BASIC Bank and the Tk 26.86 billion of Hallmark amounts to embezzlement; although these are loans in name, everything points to forgery.

The ACC chairman said they recovered Tk 11.48 billion from Hallmark and BASIC Bank without assuring them that cases would be withdrawn. But it is true that Tk 130 million was collected from Nakshi Knit of Hallmark in 2014 on the condition that cases would not be filed against them.

ACC helps get back money

A terminated cashier of the Dhaka Power Distribution Company Limited (DPDC) had returned Tk 3.3 million to the company as of 6 November. He confessed his involvement in taking bribes to a director general of the ACC.

A person gave Tk 5 million to a ruling Awami League member of parliament for investing in a business, but the MP simply kept the money. Later, the ACC helped the victim get back Tk 2 million of the amount. Subir Kanti, an employee of a private organisation, paid more than Tk 1 million to a real estate company for a flat. The company, however, deceived him and did not hand over the flat.
Later, Subir got back Tk 3 million in just 24 hours with the help of the ACC.

The office of the assistant commissioner of land (AC-Land) in Dhanmondi returned Tk 120,000 taken as a bribe for the mutation of a flat after the ACC intervened. The ACC also made a private dental college in Dhaka return Tk 400,000 taken as an advance for admission.

The anti-graft watchdog recently received a phone call claiming a Banani school was illegally charging extra fees. The headmaster of the school was made by the ACC to return Tk 368,000 instantly.

According to ACC officials, the victims want to get back money taken from them forcefully and unethically, alongside the trial of the accused. When ACC chairman Iqbal Mahmood was reminded that they are catching small fish, he told Prothom Alo, "Commoners are affected most by small fish. But when we are capable, we'll net the big fish too."

Sole example

In the past 10 years, the government has managed to take back only a small amount of money from a public servant through the apex court. On 6 April 2016, ATM Nazim Ullah Chowdhury escaped imprisonment by submitting Tk 684,000 to the state treasury.

Asked about the poor record of recovering embezzled money, attorney general Mahbubey Alam told Prothom Alo that the corrupt try their utmost to delay, resist and derail trials. A special kind of prosecution team is needed to speed up the trials, he added.

Bribery in the ACC

The anti-graft watchdog, ironically, recently had to terminate one and suspend two of its own officials for taking bribes and indulging in corruption. The ACC suspended one of its additional directors for amassing wealth amounting to Tk 50 million beyond his known sources of income. An investigation officer of the ACC was fired for lodging a case against one person instead of 10.

According to a report by Transparency International Bangladesh, the amount of bribery was Tk 54.4 billion in 2007, rising to Tk 88.2 billion in 2015.

TIB executive director Iftekharuzzaman said, "Today is International Anti-Corruption Day.
The government has decided to observe the day nationally."

"Recently, ruling Awami League general secretary Obaidul Quader said that if politicians did not get involved in corruption, corruption would be 50 per cent less. I can see a political will in his statement. To benefit from this, we have to avoid indulging in corruption," he added.

*This report, originally published in the Prothom Alo print edition, has been rewritten in English by Rabiul Islam and Imam Hossain.

Photo by Matt Sayles/Invision/AP: The cast of "Moonlight" celebrates as "Moonlight" wins the best picture award at the Oscars on Sunday, Feb. 26, 2017, at the Dolby Theatre in Los Angeles.

1. AT ACADEMY AWARDS, AN EPIC ERROR
In an apparently unprecedented mistake, the wrong winner is announced for best picture. Things are soon sorted out — and "Moonlight," not "La La Land," wins the Oscar.

2. TERROR 'WAGED IN THE NAME OF THE LORD'
Former congregants say they were subjected to years of emotional and physical abuse inside an evangelical church in western North Carolina, an AP investigation reveals.

3. WHERE US FORCES COULD BE HEADED
A new military strategy to meet Trump's demand to "obliterate" the Islamic State group is likely to deepen American military involvement in Syria.

4. IRANIANS CHEER CHOICE FOR BEST FOREIGN FILM
His countrymen are also lauding the decision by Asghar Farhadi, director of "The Salesman," to boycott the Academy Awards because of the Trump administration's travel ban.

5. POLAND A POPULIST FRONT-RUNNER
Months before Britain voted to leave the European Union or the U.S. elected Trump, Poles booted out their own political establishment.

6. LITTLE CONSEQUENCE FOR DRUG MISDEEDS
Staff at VA hospitals were fired or reprimanded in a small fraction of thousands of reported cases of opioid theft and missing prescriptions since 2010, according to government data provided to the AP.

7. GUILTY PLEA EXPECTED IN AIR BAG SCANDAL
Japanese auto parts maker Takata is also expected in U.S. federal court to agree to a $1 billion penalty for concealing the deadly air bag inflator problem.

8. WHO'S NOT LOWERING GUARD ON POT
The American Academy of Pediatrics is beefing up warnings about marijuana's potential harms for teens amid increasingly lax laws and attitudes on pot use.

9. KURT BUSCH USES LAST-LAP PASS FOR DAYTONA WIN
It's a victory of redemption for the driver, who was suspended by NASCAR two days before the 2015 Daytona 500 for off-track behavior.

10. WHAT'S NEXT MOVE FOR BILL COSBY
The actor is set to return to a Pennsylvania courtroom to ask a judge to bring in outside jurors in his criminal sex assault case.

Michael Stravato for The Texas Tribune: Houston Mayor Sylvester Turner speaks to the Texas House of Representatives Committee on Appropriations during a hearing on Harvey relief funding in Houston on Monday, Oct. 2, 2017.

The dispute between the state of Texas and local governments isn't always about money, but that's where the real fighting takes place.

Reduced to its essence, the state is arguing about price. The locals are arguing about product. Prices — property taxes are the bane of the moment — are rising too quickly, the state contends, with voters cheering in the background. Those same voters, the local officials argue, are making demands for roads, hospitals, schools, police and other government services.

"Things cost money," Houston Mayor Sylvester Turner wrote last week in a blunt letter to a Senate committee working on property tax growth limits.

Hurricane Harvey was a recent example, Turner wrote: "The people of the state of Texas who choose to live in cities expect their cities to be able to respond in the event of the disaster, and cities will not be able to respond if legislation designed to strangle local governments succeeds."

This money-and-things business came up regularly during a debate almost two decades ago over whether to tax sales made over the internet. One member of a federal panel that looked into the question was then-Dallas Mayor Ron Kirk. He took the position that things bought online should be taxed the same way they'd be if the purchases were made in brick-and-mortar stores.

"When you are sitting at home in your virtual world and . . . a fire breaks out, do you want us to send a virtual fire truck or a real big red fire truck?" he asked at one point.

Everybody wants the big red truck at the virtual truck's price.

Money isn't always at the center of state policy decisions, but it often overshadows whatever else lawmakers might be talking about. School finance conversations aren't really about education, although education is the only reason for those conversations in the first place. It's easier to talk about the state's spending — whether it's enough, whether it's fairly distributed and so on — than to talk about how the quality of public schools here compares with that of schools in other states.

One widely touted ranking — the Quality Counts 2016 report from Education Week — put Texas in 42nd place overall. That's a starting point for a product-quality argument. Another widely touted ranking is the Tax Foundation's comparison of effective property taxes for homeowners by state. Texas ranked 6th highest nationally in 2016. That's a pretty good starting point for price-sensitive Texans.

That group got a boost earlier this year when Gov. Greg Abbott gathered a set of legislative proposals to limit property tax growth into a package for his re-election campaign. He didn't propose lowering property taxes; in fact, he said in his proposal that the state needs to raise its share of public education costs, the better to keep local property taxes at bay. But it puts the governor in league with state lawmakers who want to limit how much local governments can raise property tax bills without voter approval. It's true that the bills are rising fast enough to propel angry voters to town hall meetings and to the polls. Rising property taxes are a legitimate political issue, driven by what elected officials and aspiring officeholders hear from their constituents.

The local governments are on the other side of this, however, hearing from those same voters about deficiencies in schools and roads, crime protection and whatnot.

Sometimes, push comes to shove. In the days after Hurricane Harvey assaulted Texas, Turner and Abbott locked horns, with the mayor telling the governor that he would have to raise property taxes to cover storm costs unless the state could fork over some relief money fast.
That episode ended with Abbott giving the city a $50 million check and Turner backing off his threatened plans for a tax hike.

The game is still on, however. State Sen. Paul Bettencourt, R-Houston, has been loudly promoting limits on local government tax increases. It was a meeting of his Senate Select Committee on Property Tax Reform that prompted Turner's "things cost money" letter.

Bettencourt and others say rising property taxes are pushing the costs of owning homes out of some people's reach and forcing long-term residents out of their houses.

Local officials like Turner, a longtime state legislator before he was elected mayor of Houston, hear the siren Ron Kirk was listening to during the fight over internet taxes — the one on a big red fire truck.

Kolkata: The Maulana Azad College authorities have decided to postpone the annual inter-college students' fest in the wake of unrest between two groups of students. Trouble had been brewing over the organisation of the event.

A clash broke out in the college on March 7 and four students were injured. The annual event of the college was scheduled to be held on March 22 and 23. The college principal's office issued a notification in this regard on Saturday night. "The situation in the college has been tense since Thursday and there have been sporadic instances of clashes. This has prompted us to postpone the event," a senior college official said.

A source in the college said the clash broke out between two student groups, each intending to control the funds for the two-day event. The situation turned so violent that forces from two police stations, Taltala and New Market, had to intervene to bring the situation under control. The practical examination of third-year students was going on at the college at the time of the clash, and the tension on the college premises created panic among the examinees.

It may be mentioned that local MP Sudip Bandopadhyay had attended a programme in the college in the morning and had announced funds from his MPLAD scheme for the construction of a ladies' hostel at the college. Soon after Bandopadhyay left, the two groups of students picked a fight with each other. The Trinamool Chatra Parishad unit in the college, however, attributed the violence to outsiders who were against organising the annual college event. The event was scheduled to be held after a gap of five years.

The Indira Gandhi National Centre for the Arts raised the curtain on ARTH – Art for the Earth on July 5 at the IGNCA, CV Mess, Janpath, New Delhi. The chief guest for the inauguration ceremony was the minister of state for culture, environment, forest and climate change, Dr Mahesh Sharma.

The exhibition, on display until October 22, is the first public art project of its kind on the environment by one of India's leading contemporary artists, Manav Gupta. Comprising "Excavations in Hymns of Clay", a suite of environmental art installations woven together with a storyline and poetry, 'Arth' is an evolving, site-specific and dynamic engagement.

As a public art project, the artist has sought to transform the quintessentially Indian potter's produce of clay objects, such as earthen lamps (diyas), the local cigar (chilam) and earthen cups (kullar), from their individual identities into metaphors and idioms of sustainability, context, perception and treatment. The clay objects and other items on display will stun the viewer with the artist's originality of thought as he produces a cutting-edge contemporary language whose global vocabulary is derived from the "local".

With the emotive content of an epic story, Manav's statement is dipped gently into the essence of Indian Vedic practices, subtly bringing to light the repository of solutions that the ancient way of life could offer in today's context of sustainable development and of current issues around rivers like the Ganga.
Whether it is the latest 'Rain', the 'River waterfront', 'Time Machine', 'Bee-hive Garden', 'River Bed of Love' or 'Noah's Ark', the fragility of clay, juxtaposed with the limitlessness of the "cup of life", questions the paradigm of time and of human engagement with it in today's rapidly mechanised, consumerist use of the earth's resources.

The works, conceptualised, created and constructed by the artist with the venue in mind, form a sensitive, natural interface with the ambience, seeking to engage fresh and locally relevant dialogues and questions that audiences can have with the art and within themselves.

In this article, we will see how convolutional layers work and how to use them. We will also see how you can build your own convolutional neural network in Keras to build better, more powerful deep neural networks and solve computer vision problems, and how to improve such a network with data augmentation. For a better understanding of the concepts, we will use the well-known CIFAR-10 dataset, created by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

The following article has been taken from the book Deep Learning Quick Reference, written by Mike Bernico.

Adding inputs to the network

The CIFAR-10 dataset is made up of 60,000 32 x 32 color images that belong to 10 classes, with 6,000 images per class. We'll be using 50,000 images as a training set, 5,000 images as a validation set, and 5,000 images as a test set.

The input tensor for the convolutional neural network will have shape (N, 32, 32, 3), which we will pass to the build_network function. The following code begins building the network:

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

Getting the output

The output of this model will be a class prediction, from 0 to 9, so we will use a 10-node softmax. We will use the following code to define the output:

output = Dense(10, activation="softmax", name="softmax")(d2)

Cost function and metrics

Earlier, we used categorical cross-entropy as the loss function for a multi-class classifier. This is just another multi-class classifier, so we can continue using categorical cross-entropy as our loss function and accuracy as a metric. We've moved on to using images as input, but luckily our cost function and metrics remain unchanged.

Working with convolutional layers

We're going to use two convolutional layers, with batch normalization and max pooling. This is going to require us to make quite a few choices, which of course we could choose to search as hyperparameters later.
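Before defining the Keras layers, it can help to see what a single convolution actually computes. The following is a toy, loop-based numpy sketch of the operation a Conv2D filter performs on one channel (real implementations are vectorized and add a bias and an activation; this is for intuition only):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # slide the kernel over the image with stride 1 and 'valid' padding,
    # taking the elementwise product and summing at each position
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0      # a simple averaging filter
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (3, 3): a 5x5 input shrinks to 3x3 under valid padding
```

Each of the 64 filters in the first block below does this over all three input channels, producing one channel of the output feature map.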
It's always better to get something working first, though. As the popular computer scientist and mathematician Donald Knuth would say, premature optimization is the root of all evil. We will use the following code snippet to define the two convolutional blocks:

# convolutional block 1
conv1 = Conv2D(64, kernel_size=(3,3), activation="relu", name="conv_1")(inputs)
batch1 = BatchNormalization(name="batch_norm_1")(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

# convolutional block 2
conv2 = Conv2D(32, kernel_size=(3,3), activation="relu", name="conv_2")(pool1)
batch2 = BatchNormalization(name="batch_norm_2")(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

So, clearly, we have two convolutional blocks here, each consisting of a convolutional layer, a batch normalization layer, and a pooling layer. In the first block, I'm using 64 3 x 3 filters with relu activations, valid (no) padding, and a stride of 1. Batch normalization doesn't require us to choose any hyperparameters here. The pooling layer uses 2 x 2 pooling windows, valid padding, and a stride of 2 (the dimension of the window). The second block is very much the same; however, I'm halving the number of filters to 32.

While there are many knobs we could turn in this architecture, the one I would tune first is the kernel size of the convolutions. Kernel size tends to be an important choice. In fact, some modern neural network architectures, such as Google's Inception, allow us to use multiple filter sizes in the same convolutional layer.

Getting the fully connected layers

After two rounds of convolution and pooling, our tensors have gotten relatively small and deep. After pool_2, the output dimension is (n, 6, 6, 32). We have, in these convolutional layers, hopefully extracted relevant image features that this 6 x 6 x 32 tensor represents.
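The (n, 6, 6, 32) shape quoted above can be verified with the standard valid-padding arithmetic, out = (in - kernel) // stride + 1, applied layer by layer. A few lines of plain Python trace a 32 x 32 CIFAR-10 image through the network:

```python
def conv_out(size, kernel, stride=1):
    # output size of one spatial dimension under 'valid' padding
    return (size - kernel) // stride + 1

size = 32                    # CIFAR-10 images are 32 x 32
size = conv_out(size, 3)     # conv_1, 3x3 kernel, stride 1 -> 30
size = conv_out(size, 2, 2)  # pool_1, 2x2 window, stride 2 -> 15
size = conv_out(size, 3)     # conv_2 -> 13
size = conv_out(size, 2, 2)  # pool_2 -> 6
print(size)  # 6
```

With 32 filters in the second block, the tensor entering the dense layers is indeed (n, 6, 6, 32).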
To classify images using these features, we will connect this tensor to a few fully connected layers before we go to our final output layer. In this example, I'll use a 512-neuron fully connected layer, a 256-neuron fully connected layer, and finally the 10-neuron output layer. I'll also be using dropout to help prevent overfitting, but only a very little bit! The code for this process is given as follows for your reference:

from keras.layers import Flatten, Dense, Dropout

# fully connected layers
flatten = Flatten()(pool2)
fc1 = Dense(512, activation="relu", name="fc1")(flatten)
d1 = Dropout(rate=0.2, name="dropout1")(fc1)
fc2 = Dense(256, activation="relu", name="fc2")(d1)
d2 = Dropout(rate=0.2, name="dropout2")(fc2)

I haven't previously mentioned the flatten layer used above. The flatten layer does exactly what its name suggests: it flattens the n x 6 x 6 x 32 tensor into an n x 1152 vector. This will serve as the input to the fully connected layers.

Working with multi-GPU models in Keras

Many cloud computing platforms can provision instances that include multiple GPUs. As our models grow in size and complexity, you might want to be able to parallelize the workload across multiple GPUs. This can be a somewhat involved process in native TensorFlow, but in Keras, it's just a function call. Build your model as normal, as shown in the following code:

model = Model(inputs=inputs, outputs=output)

Then, we just pass that model to keras.utils.multi_gpu_model, with the help of the following code:

model = multi_gpu_model(model, num_gpu)

In this example, num_gpu is the number of GPUs we want to use.
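As a quick sanity check on the flatten step mentioned earlier, the same collapse of the (n, 6, 6, 32) tensor can be reproduced with numpy's reshape, which is conceptually all the Keras Flatten layer does:

```python
import numpy as np

batch = np.zeros((4, 6, 6, 32))           # a dummy batch of 4 feature-map tensors
flat = batch.reshape(batch.shape[0], -1)  # collapse everything but the batch axis
print(flat.shape)  # (4, 1152), since 6 * 6 * 32 = 1152
```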
Training the model

Putting the model together, and incorporating our new multi-GPU feature, we come up with the following architecture:

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

    # convolutional block 1
    conv1 = Conv2D(64, kernel_size=(3,3), activation="relu", name="conv_1")(inputs)
    batch1 = BatchNormalization(name="batch_norm_1")(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

    # convolutional block 2
    conv2 = Conv2D(32, kernel_size=(3,3), activation="relu", name="conv_2")(pool1)
    batch2 = BatchNormalization(name="batch_norm_2")(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

    # fully connected layers
    flatten = Flatten()(pool2)
    fc1 = Dense(512, activation="relu", name="fc1")(flatten)
    d1 = Dropout(rate=0.2, name="dropout1")(fc1)
    fc2 = Dense(256, activation="relu", name="fc2")(d1)
    d2 = Dropout(rate=0.2, name="dropout2")(fc2)

    # output layer
    output = Dense(10, activation="softmax", name="softmax")(d2)

    # finalize and compile
    model = Model(inputs=inputs, outputs=output)
    if num_gpu > 1:
        model = multi_gpu_model(model, num_gpu)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=["accuracy"])
    return model

We can use this to build our model:

model = build_network(num_gpu=1, input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS))

And then we can fit it, as you'd expect:

model.fit(x=data["train_X"], y=data["train_y"],
          batch_size=32,
          epochs=200,
          validation_data=(data["val_X"], data["val_y"]),
          verbose=1,
          callbacks=callbacks)

As we train this model, you will notice that overfitting is an immediate concern. Even with a relatively modest two convolutional layers, we're already overfitting a bit: the gap between training and validation accuracy grows as training continues. It's no surprise; 50,000 observations is not a lot of data, especially for a computer vision problem. In practice, computer vision problems benefit from very large datasets.
In fact, Chen Sun and colleagues showed that additional data tends to help computer vision models roughly linearly with the log of the data volume (https://arxiv.org/abs/1707.02968). Unfortunately, we can't really go and find more data in this case. But maybe we can make some. Let's talk about data augmentation next.

Using data augmentation

Data augmentation is a technique where we apply transformations to an image and use both the original image and the transformed images to train on. Imagine we had a training set with a cat in it. If we were to apply a horizontal flip to that image, we'd get a mirror image of the same cat. It is exactly the same image, of course, but we can use both the original and the transformation as training examples. This isn't quite as good as having two separate cats in our training set; however, it does allow us to teach the computer that a cat is a cat regardless of the direction it's facing.

In practice, we can do a lot more than just a horizontal flip. We can vertically flip where it makes sense, shift, and randomly rotate images as well. This allows us to artificially amplify our dataset and make it seem bigger than it is. Of course, you can only push this so far, but it's a very powerful tool in the fight against overfitting when little data exists.

What is the Keras ImageDataGenerator?

Not so long ago, the only way to do image augmentation was to code up the transforms and apply them randomly to the training set, saving the transformed images to disk as we went (uphill, both ways, in the snow). Luckily for us, Keras now provides an ImageDataGenerator class that can apply transformations on the fly as we train, without our having to hand code the transformations.
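To make the transformations just described concrete, here is a hand-rolled numpy sketch of a horizontal flip and a small shift on an (H, W, C) image array. This is for intuition only and is not how ImageDataGenerator is implemented internally:

```python
import numpy as np

def horizontal_flip(img):
    # reverse the width axis of an (H, W, C) image
    return img[:, ::-1, :]

def shift_right(img, pixels):
    # move the image right; vacated columns on the left are zero-filled
    h, w, c = img.shape
    out = np.zeros_like(img)
    out[:, pixels:, :] = img[:, :w - pixels, :]
    return out

img = np.arange(2 * 4 * 1).reshape(2, 4, 1)  # a tiny 2x4 single-channel "image"
flipped = horizontal_flip(img)               # first row becomes [3, 2, 1, 0]
shifted = shift_right(img, 1)                # first row becomes [0, 0, 1, 2]
```

Applying a flip twice recovers the original image, which is a handy property to test augmentation code against.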
We can create a data generator object from ImageDataGenerator by instantiating it like this:

def create_datagen(train_X):
    data_generator = ImageDataGenerator(
        rotation_range=20,
        width_shift_range=0.02,
        height_shift_range=0.02,
        horizontal_flip=True)
    data_generator.fit(train_X)
    return data_generator

In this example, I'm using shifts, rotation, and horizontal flips. I'm using only very small shifts. Through experimentation, I found that larger shifts were too much and my network wasn't actually able to learn anything. Your experience will vary as your problem does, but I would expect larger images to be more tolerant of shifting. In this case, we're using 32-pixel images, which are quite small.

Training with a generator

If you haven't used a generator before, it works like an iterator. Every time you call the ImageDataGenerator .flow() method, it will produce a new training minibatch, with random transformations applied to the images it was fed. The Keras Model class comes with a .fit_generator() method that allows us to fit with a generator rather than a given dataset:

model.fit_generator(data_generator.flow(data["train_X"], data["train_y"], batch_size=32),
                    steps_per_epoch=len(data["train_X"]) // 32,
                    epochs=200,
                    validation_data=(data["val_X"], data["val_y"]),
                    verbose=1,
                    callbacks=callbacks)

Here, we've replaced the traditional x and y parameters with the generator. Most importantly, notice the steps_per_epoch parameter. You can sample with replacement any number of times from the training set, and you can apply random transformations each time. This means that we can use more minibatches each epoch than we have data. Here, I'm going to sample only as many batches as I have observations, but that isn't required; we can and should push this number higher if we can. Before we wrap things up, let's look at how beneficial image augmentation was in this case: just a little bit of it really helped us out.
Not only is our overall accuracy higher, but our network is overfitting much more slowly. If you have a computer vision problem with just a little bit of data, image augmentation is something you'll want to do.

We saw the benefits and ease of training a convolutional neural network from scratch using Keras, and then of improving that network using data augmentation. If you found the above article useful, check out the book Deep Learning Quick Reference for more information on modeling and training various types of deep neural networks with ease and efficiency.

Read next:
- Top 5 Deep Learning Architectures
- CapsNet: Are Capsule networks the antidote for CNNs kryptonite?
- What is a CNN?