Listen at WEAA Live Stream: http://amber.streamguys.com.4020/live.m3u

In the wake of the backlash against him after seemingly fawning over Donald Trump, Under Armour CEO Kevin Plank has written an open letter to the city of Baltimore. We'll examine the substance of the letter with a man who knows Plank well, former Baltimore City Councilman Carl Stokes. Plus, the Mod Squad, Taya Graham and Stephen Janis of The Real News Network, report from Annapolis on the efforts at bail reform.

These stories and much more on AFRO's First Edition with Sean Yoes, Monday through Friday, 5-7 p.m.

Univision terminated its foray into the pure-play digital-media realm with the sale of Gizmodo Media Group and The Onion earlier this month, at a fire-sale price significantly less than the $135 million the broadcaster paid for the former Gawker Media properties alone two and a half years ago.

The new owners of the suite of websites are private-equity firm Great Hill Partners and Jim Spanfeller, a longtime media executive who serves as CEO of the newly formed G/O Media and owns a minority stake in the company.

Spanfeller, in an interview with Variety, said he remains very bullish on G/O Media's ability to build a thriving digital venture. And, he said, there are no current plans for significant layoffs at the company, which has just over 400 employees, as it separates from Univision. "We don't plan to cut our way to growth," Spanfeller said.

That said, Spanfeller said G/O Media will be "looking to run things more efficiently" and improve the company's cost structure by providing "more direction that has been lacking" — referring to Gizmodo Media Group's ownership under Univision.

"We are confident we can make this a profitable, fast-growing business," he said.

For now, G/O Media isn't planning to shut down any of the properties it acquired. Those are Gizmodo, Jalopnik, Jezebel, Deadspin, Lifehacker, Kotaku, Splinter and The Root, plus The Onion's portfolio, which includes its flagship satire publication, entertainment outlet A.V. Club, ClickHole and The Takeout.

Worth noting: according to the Writers Guild of America East, which represents 233 employees at the former Gizmodo Media Group and The Onion, its members will continue to work under union-negotiated terms and conditions following Great Hill's acquisition.

In splitting from Univision, G/O Media is planning to move into new New York offices on May 4 — at 1540 Broadway in Times Square, in space sublet from Viacom — and is looking for an office in L.A.

Spanfeller declined to comment on reports that the price tag on the Gizmodo Media Group/Onion sale was under $50 million, saying only, "We feel we got a very good deal."

In aggregate, the collection of properties reaches about 100 million unique monthly visitors, although that figure falls to 70 million-80 million after backing out third-party sites that are part of the G/O Media ad network.

"The more time we spent with the data, the more excited we got," Spanfeller said. Besides comprising a large audience, the group also skews younger, providing better reach among consumers 18-34 than Vice, Vox, BuzzFeed or Group Nine, according to Spanfeller.
"Then what was really interesting was how engaged they are with their audience — they're not dependent on social media."

Spanfeller, former CEO of Forbes.com, most recently built Spanfeller Media Group, whose properties included The Daily Meal and The Active Times; in December 2016, he sold the company to Tribune Publishing Co. (then called Tronc).

As part of standing up G/O Media as an independent entity, Spanfeller recently hired a chief technology officer and a CFO, as the execs providing those functions for Univision's Gizmodo Media Group are remaining with the Hispanic broadcaster. Spanfeller said G/O Media is currently looking to hire a chief talent officer to run HR.

The company's new CTO is Jesse Knight, who was CTO/CIO of Vice Media from 2012-17. He started his career at Solid Sender, a development and consulting practice he founded and ran after graduating from McGill University. Filling the CFO spot is Tom Callahan, who most recently was CFO of BandLab Technologies (where he helped support media operations and manage its investment in Rolling Stone, now owned by Penske Media Corp., the parent company of Variety). He previously worked with Spanfeller at Forbes Media.

Given the challenges for digital-media players across the spectrum, can G/O Media make a go of it? The company's websites have been running in the red: Gizmodo Media Group's properties generated over $80 million in revenue in 2017 but lost $20 million, the Wall Street Journal reported.

Spanfeller declined to discuss specifics of the company's financials. But he insisted G/O Media can become a viable business on its own, without needing to engage in any M&A. "We are open to incremental things we can add to the company," he said. "But I don't think it's a situation where you do bolt-ons or die – you do bolt-ons and then you can be more efficient on the back-office side."

Univision bought the Gawker assets in a bankruptcy auction (which didn't include Gawker.com) after Gawker Media was sued into bankruptcy by Silicon Valley billionaire Peter Thiel. Univision acquired a 40% stake in The Onion in January 2017 for an undisclosed amount.

Separately, Gawker is slated to relaunch this year under Bustle Digital Group, after CEO Bryan Goldberg paid $1.35 million for the media gossip blog.

In this article, we will see how convolutional layers work and how to use them to build your own convolutional neural network in Keras, so you can build better, more powerful deep neural networks and solve computer vision problems. We will also see how to improve such a network using data augmentation. For a better understanding of the concepts, we will be using a well-known dataset, CIFAR-10, created by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The following article has been taken from the book Deep Learning Quick Reference, written by Mike Bernico.

Adding inputs to the network

The CIFAR-10 dataset is made up of 60,000 32 x 32 color images that belong to 10 classes, with 6,000 images per class. We'll be using 50,000 images as a training set, 5,000 images as a validation set, and 5,000 images as a test set. The input tensor for the convolutional neural network will have shape (N, 32, 32, 3), which we will pass to the build_network function. The following code begins building the network:

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

Getting the output

The output of this model will be a class prediction, from 0-9, so we will use a 10-node softmax. We will use the following code to define the output:

output = Dense(10, activation="softmax", name="softmax")(d2)

Cost function and metrics

Earlier, we used categorical cross-entropy as the loss function for a multi-class classifier. This is just another multi-class classifier, so we can continue using categorical cross-entropy as our loss function and accuracy as a metric. We've moved on to using images as input, but luckily our cost function and metrics remain unchanged.

Working with convolutional layers

We're going to use two convolutional layers, with batch normalization and max pooling. This requires us to make quite a few choices, which of course we could search as hyperparameters later. It's always better to get something working first, though. As the popular computer scientist and mathematician Donald Knuth would say, premature optimization is the root of all evil. We will use the following code snippet to define the two convolutional blocks:

# convolutional block 1
conv1 = Conv2D(64, kernel_size=(3, 3), activation="relu", name="conv_1")(inputs)
batch1 = BatchNormalization(name="batch_norm_1")(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

# convolutional block 2
conv2 = Conv2D(32, kernel_size=(3, 3), activation="relu", name="conv_2")(pool1)
batch2 = BatchNormalization(name="batch_norm_2")(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

So we have two convolutional blocks here, each consisting of a convolutional layer, a batch normalization layer, and a pooling layer. In the first block, I'm using 64 3 x 3 filters with relu activations, valid (no) padding, and a stride of 1. Batch normalization doesn't require any parameters and isn't really trainable. The pooling layer uses 2 x 2 pooling windows, valid padding, and a stride of 2 (the dimension of the window). The second block is very much the same; however, I'm halving the number of filters to 32.

While there are many knobs we could turn in this architecture, the one I would tune first is the kernel size of the convolutions. Kernel size tends to be an important choice. In fact, some modern neural network architectures, such as Google's Inception, allow us to use multiple filter sizes in the same convolutional layer.
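To make that idea concrete, here is a minimal sketch (not from the book; the helper name and filter counts are mine) of an Inception-style block in Keras, where several kernel sizes run over the same input and their feature maps are concatenated along the channel axis:

from keras.layers import Conv2D, concatenate

def multi_kernel_block(x):
    # Three parallel convolutions with different receptive field sizes over the same input
    tower_1 = Conv2D(32, kernel_size=(1, 1), padding="same", activation="relu")(x)
    tower_2 = Conv2D(32, kernel_size=(3, 3), padding="same", activation="relu")(x)
    tower_3 = Conv2D(32, kernel_size=(5, 5), padding="same", activation="relu")(x)
    # "same" padding keeps spatial dimensions equal, so the outputs can be stacked channel-wise
    return concatenate([tower_1, tower_2, tower_3], axis=-1)

In the real Inception architecture, 1 x 1 convolutions are also used to reduce channel depth before the more expensive 3 x 3 and 5 x 5 kernels, but the sketch above captures the core idea of letting the network learn which filter size works best.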
Getting the fully connected layers

After two rounds of convolution and pooling, our tensors have gotten relatively small and deep. After pool_2, the output dimension is (n, 6, 6, 32). We have, in these convolutional layers, hopefully extracted relevant image features that this 6 x 6 x 32 tensor represents. To classify images using these features, we will connect this tensor to a few fully connected layers before we go to our final output layer.

In this example, I'll use a 512-neuron fully connected layer, a 256-neuron fully connected layer, and finally, the 10-neuron output layer. I'll also be using dropout to help prevent overfitting, but only a very little bit! The code for this process is given as follows for your reference:

from keras.layers import Flatten, Dense, Dropout

# fully connected layers
flatten = Flatten()(pool2)
fc1 = Dense(512, activation="relu", name="fc1")(flatten)
d1 = Dropout(rate=0.2, name="dropout1")(fc1)
fc2 = Dense(256, activation="relu", name="fc2")(d1)
d2 = Dropout(rate=0.2, name="dropout2")(fc2)

I haven't previously mentioned the Flatten layer used above. The flatten layer does exactly what its name suggests: it flattens the n x 6 x 6 x 32 tensor into an n x 1152 vector, which will serve as the input to the fully connected layers.

Working with multi-GPU models in Keras

Many cloud computing platforms can provision instances that include multiple GPUs. As our models grow in size and complexity, you might want to be able to parallelize the workload across multiple GPUs. This can be a somewhat involved process in native TensorFlow, but in Keras, it's just a function call. Build your model as normal, as shown in the following code:

model = Model(inputs=inputs, outputs=output)

Then, we just pass that model to keras.utils.multi_gpu_model, with the help of the following code:

model = multi_gpu_model(model, num_gpu)

In this example, num_gpu is the number of GPUs we want to use.
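One practical caveat worth adding here (my note, not from the excerpt): multi_gpu_model returns a wrapper that splits each incoming batch across the GPUs, and the Keras documentation recommends saving weights through the original template model rather than through the wrapper. A minimal sketch, assuming two GPUs and the build_network function from this article:

import tensorflow as tf
from keras.utils import multi_gpu_model

# Build the template model on the CPU so its weights live in host memory
with tf.device("/cpu:0"):
    template = build_network(num_gpu=1, input_shape=(32, 32, 3))

# Replicate the model on 2 GPUs; each batch of 32 is split into two sub-batches of 16
parallel = multi_gpu_model(template, 2)
parallel.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# After training with parallel.fit(...), save the template; the two models share weights
template.save("cifar10_cnn.h5")  # hypothetical filename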
Training the model

Putting the model together, and incorporating our new cool multi-GPU feature, we come up with the following architecture:

from keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model
from keras.utils import multi_gpu_model

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

    # convolutional block 1
    conv1 = Conv2D(64, kernel_size=(3, 3), activation="relu", name="conv_1")(inputs)
    batch1 = BatchNormalization(name="batch_norm_1")(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

    # convolutional block 2
    conv2 = Conv2D(32, kernel_size=(3, 3), activation="relu", name="conv_2")(pool1)
    batch2 = BatchNormalization(name="batch_norm_2")(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

    # fully connected layers
    flatten = Flatten()(pool2)
    fc1 = Dense(512, activation="relu", name="fc1")(flatten)
    d1 = Dropout(rate=0.2, name="dropout1")(fc1)
    fc2 = Dense(256, activation="relu", name="fc2")(d1)
    d2 = Dropout(rate=0.2, name="dropout2")(fc2)

    # output layer
    output = Dense(10, activation="softmax", name="softmax")(d2)

    # finalize and compile
    model = Model(inputs=inputs, outputs=output)
    if num_gpu > 1:
        model = multi_gpu_model(model, num_gpu)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

We can use this to build our model:

model = build_network(num_gpu=1, input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS))

And then we can fit it, as you'd expect:

model.fit(x=data["train_X"],
          y=data["train_y"],
          batch_size=32,
          epochs=200,
          validation_data=(data["val_X"], data["val_y"]),
          verbose=1,
          callbacks=callbacks)

As we train this model, you will notice that overfitting is an immediate concern. Even with a relatively modest two convolutional layers, we're already overfitting a bit: training accuracy keeps climbing while validation accuracy stalls.

It's no surprise; 50,000 observations is not a lot of data, especially for a computer vision problem. In practice, computer vision problems benefit from very large datasets. In fact, Chen Sun and colleagues showed that additional data tends to help computer vision models linearly with the log of the data volume (https://arxiv.org/abs/1707.02968). Unfortunately, we can't really go find more data in this case. But maybe we can make some. Let's talk about data augmentation next.

Using data augmentation

Data augmentation is a technique where we apply transformations to an image and use both the original image and the transformed images to train on. Imagine we had a training set with a picture of a cat in it. If we were to apply a horizontal flip to that image, we'd get a mirrored copy: exactly the same image, of course, but we can use both the original and the transformation as training examples. This isn't quite as good as two separate cats in our training set; however, it does allow us to teach the computer that a cat is a cat regardless of the direction it's facing.

In practice, we can do a lot more than just a horizontal flip. We can vertically flip (when it makes sense), shift, and randomly rotate images as well. This allows us to artificially amplify our dataset and make it seem bigger than it is. Of course, you can only push this so far, but it's a very powerful tool in the fight against overfitting when little data exists.

What is the Keras ImageDataGenerator?

Not so long ago, the only way to do image augmentation was to code up the transforms and apply them randomly to the training set, saving the transformed images to disk as we went (uphill, both ways, in the snow).
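To see how little code the simplest of those transforms takes, here is a minimal sketch (mine, not from the book) of a horizontal flip applied to a batch of images stored as a NumPy array:

import numpy as np

def horizontal_flip(images):
    # images has shape (n, height, width, channels); reversing the width axis mirrors each image
    return images[:, :, ::-1, :]

# e.g., a dummy batch of eight 32 x 32 RGB images
flipped = horizontal_flip(np.random.rand(8, 32, 32, 3))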
Luckily for us, Keras now provides an ImageDataGenerator class that can apply transformations on the fly as we train, without us having to hand-code the transformations. We can create a data generator object from ImageDataGenerator by instantiating it like this:

from keras.preprocessing.image import ImageDataGenerator

def create_datagen(train_X):
    data_generator = ImageDataGenerator(
        rotation_range=20,
        width_shift_range=0.02,
        height_shift_range=0.02,
        horizontal_flip=True)
    data_generator.fit(train_X)
    return data_generator

In this example, I'm using shifts, rotation, and horizontal flips, with only very small shifts. Through experimentation, I found that larger shifts were too much and my network wasn't actually able to learn anything. Your experience will vary as your problem does, but I would expect larger images to be more tolerant of shifting. In this case, we're using 32-pixel images, which are quite small.

Training with a generator

If you haven't used a generator before, it works like an iterator. Every time you call the ImageDataGenerator's .flow() method, it will produce a new training minibatch, with random transformations applied to the images it was fed. The Keras Model class comes with a .fit_generator() method that allows us to fit with a generator rather than a given dataset:

model.fit_generator(data_generator.flow(data["train_X"], data["train_y"], batch_size=32),
                    steps_per_epoch=len(data["train_X"]) // 32,
                    epochs=200,
                    validation_data=(data["val_X"], data["val_y"]),
                    verbose=1,
                    callbacks=callbacks)

Here, we've replaced the traditional x and y parameters with the generator. Most importantly, notice the steps_per_epoch parameter. You can sample with replacement any number of times from the training set, and you can apply random transformations each time. This means that we can use more minibatches each epoch than we have data. Here, I'm sampling only as many batches as I have observations, but that isn't required; we can and should push this number higher if we can.

Before we wrap things up, let's look at how beneficial image augmentation is in this case. Just a little bit of image augmentation really helped us out: not only is our overall accuracy higher, but our network is also overfitting much more slowly. If you have a computer vision problem with just a little bit of data, image augmentation is something you'll want to do.

We saw the benefits and ease of training a convolutional neural network from scratch using Keras, and then improving that network using data augmentation. If you found the above article to be useful, make sure you check out the book Deep Learning Quick Reference for more information on modeling and training various different types of deep neural networks with ease and efficiency.

Just when Google is facing large walkouts and protests against its policies, another consumer group has lodged a complaint against Google's user tracking. According to a report published by the European Consumer Organisation (BEUC), Google uses various methods to encourage users to enable the settings 'location history' and 'web and app activity', which are integrated into all Google user accounts, and BEUC alleges that Google uses these features to facilitate targeted advertising. BEUC and its members, including those from the Czech Republic, Greece, Norway, Slovenia, and Sweden, argue that what Google is doing is in breach of the GDPR.

Per the report, BEUC says: "We argue that consumers are deceived into being tracked when they use Google services. This happens through a variety of techniques, including withholding or hiding information, deceptive design practices, and bundling of services. We argue that these practices are unethical, and that they in our opinion are in breach of European data protection legislation because they fail to fulfill the conditions for lawful data processing."

Android users are generally unaware that their Location History or Web & App Activity is enabled. Google uses a variety of dark patterns to collect the user's exact location, including altitude (e.g., the floor of a building) and mode of transportation, both outside and inside, to serve targeted advertising. Moreover, there is no real option to turn off Location History, only to pause it. Even if the user has kept Location History disabled, their location will still be shared with Google through Web & App Activity.

"If you pause Location History, we make clear that — depending on your individual phone and app settings — we might still collect and use location data to improve your Google experience," a Google spokesman told Reuters.

"These practices are not compliant with the General Data Protection Regulation (GDPR), as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," BEUC said, speaking on behalf of the countries' consumer groups.

Google claims to have a legitimate interest in serving ads based on personal data, but the fact that location data is collected, and how it is used, is not clearly expressed to the user. BEUC calls out Google for treating its legitimate interest in serving advertising as part of its business model as if it overrides the data subject's fundamental right to privacy; BEUC argues that, in light of how Web & App Activity is presented to users, the interests of the data subject should take precedence.

Reuters asked a Google spokesman for comment on the consumer groups' complaints. According to him, "Location History is turned off by default, and you can edit, delete, or pause it at any time. If it's on, it helps to improve services like predicted traffic on your commute. We're constantly working to improve our controls, and we'll be reading this report closely to see if there are things we can take on board."

People are largely supportive of the allegations BEUC has made against Google. However, some feel it is just another attack on Google: if people voluntarily, and most of them knowingly, use these services and consent to giving personal information, it should not be a concern for any third party. As one commenter put it: "I can't help but think that there's some competitors' money behind these attacks on Google.
They provide location services which you can turn off or delete yourself, which is anonymous to anyone else, and there's no evidence they sell your data (they just anonymously connect you to businesses you search for). Versus carriers which track you without an option to opt-in or out and actually do sell your data to 3rd parties."

Another commenter countered: "If the vast majority of customers don't know arithmetic, then yes, that's exactly what happened. Laws are a UX problem, not a theory problem. If most of your users end up getting deceived, you can't say 'BUT IT WAS ALL RIGHT THERE IN THE SMALL PRINT, IT'S NOT MY FAULT THEY DIDN'T READ IT!'. Like, this is literally how everything else works."

Read the full conversation on Hacker News. You may also go through the full "Every step you take" report published by BEUC for more information.