Dataset Viewer

| Column | Type | Range / Values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 541k |
| Id | int64 | 6 – 8.39M |
| CreatorUserId | int64 | 1 – 29.3M |
| OwnerUserId | float64 | 368 – 29.3M (nullable) |
| OwnerOrganizationId | float64 | 2 – 5.18k (nullable) |
| CurrentDatasetVersionId | float64 | 58 – 13.2M (nullable) |
| CurrentDatasourceVersionId | float64 | 58 – 13.9M (nullable) |
| ForumId | int64 | 762 – 8.95M |
| Type | string | 1 class |
| CreationDate | string | length 19 |
| LastActivityDate | string | length 10 |
| TotalViews | int64 | 0 – 12.1M |
| TotalDownloads | int64 | 0 – 995k |
| TotalVotes | int64 | 0 – 54.8k |
| TotalKernels | int64 | 0 – 7.61k |
| Medal | float64 | 1 – 3 (nullable) |
| MedalAwardDate | string | length 10 (nullable) |
| DatasetId | int64 | 6 – 8.39M |
| Description | string | length 1 – 365k (nullable) |
| Title | string | length 2 – 57 (nullable) |
| Subtitle | string | length 1 – 168 (nullable) |
| LicenseName | string | 32 classes |
| VersionNumber | float64 | 1 – 16.2k (nullable) |
| VersionChangesCount | int64 | 0 – 16.2k |
| AllTags | string | 48 classes |
| AllTagsCount | int64 | 0 – 11 |
| NormViews | float64 | 0 – 1 |
| NormVotes | float64 | 0 – 1 |
| NormDownloads | float64 | 0 – 1 |
| NormKernels | float64 | 0 – 1 |
| CombinedScore | float64 | 0 – 0.74 |
| LogCombinedScore | float64 | 0 – 0.55 |
| Rank_CombinedScore | int64 | 1 – 242k |

Sample rows from the table follow, with each record's fields separated by `|` and the full Description text embedded inline.
648
| 310
| 14,069
| null | 1,160
| 23,498
| 23,502
| 1,838
|
Dataset
|
11/03/2016 13:21:36
|
02/06/2018
| 12,089,246
| 995,129
| 12,501
| 5,571
| 1
|
11/06/2019
| 310
|
Context
---------
It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.
Content
---------
The dataset contains transactions made by credit cards in September 2013 by European cardholders.
This dataset presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features or more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. Feature 'Amount' is the transaction amount, which can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and takes value 1 in case of fraud and 0 otherwise.
Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion matrix accuracy is not meaningful for unbalanced classification.
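As a rough illustration of the AUPRC recommendation, here is a minimal scikit-learn sketch (not part of the dataset itself). Only the `Class` column comes from the description above; the file name `creditcard.csv` and the baseline model are assumptions.

```python
# Minimal sketch (assumptions: file "creditcard.csv", 'Class' target with 1 = fraud).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]

# Stratify so the 0.172% positive class is represented in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

# average_precision_score summarizes the precision-recall curve (AUPRC).
print("AUPRC:", average_precision_score(y_test, scores))
```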
Update (03/05/2021)
---------
A simulator for transaction data has been released as part of the practical handbook on Machine Learning for Credit Card Fraud Detection - https://fraud-detection-handbook.github.io/fraud-detection-handbook/Chapter_3_GettingStarted/SimulatedDataset.html. We invite all practitioners interested in fraud detection datasets to also check out this data simulator, and the methodologies for credit card fraud detection presented in the book.
Acknowledgements
---------
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection.
More details on current and past projects on related topics are available at [https://www.researchgate.net/project/Fraud-detection-5][1] and on the page of the [DefeatFraud][2] project.
Please cite the following works:
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. [Calibrating Probability with Undersampling for Unbalanced Classification.][3] In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. [Learned lessons in credit card fraud detection from a practitioner perspective][4], Expert systems with applications,41,10,4915-4928,2014, Pergamon
Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. [Credit card fraud detection: a realistic modeling and a novel learning strategy,][5] IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE
Dal Pozzolo, Andrea [Adaptive Machine learning for credit card fraud detection][6] ULB MLG PhD thesis (supervised by G. Bontempi)
Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. [Scarff: a scalable framework for streaming credit card fraud detection with Spark][7], Information fusion,41, 182-194,2018,Elsevier
Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. [Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization,][8] International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing
Bertrand Lebichot, Yann-Aël Le Borgne, Liyun He, Frederic Oblé, Gianluca Bontempi [Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection](https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection), INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019
Fabrizio Carcillo, Yann-Aël Le Borgne, Olivier Caelen, Frederic Oblé, Gianluca Bontempi [Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection ](https://www.researchgate.net/publication/333143698_Combining_Unsupervised_and_Supervised_Learning_in_Credit_Card_Fraud_Detection) Information Sciences, 2019
Yann-Aël Le Borgne, Gianluca Bontempi [Reproducible machine Learning for Credit Card Fraud Detection - Practical Handbook ](https://www.researchgate.net/publication/351283764_Machine_Learning_for_Credit_Card_Fraud_Detection_-_Practical_Handbook)
Bertrand Lebichot, Gianmarco Paldino, Wissam Siblini, Liyun He, Frederic Oblé, Gianluca Bontempi [Incremental learning strategies for credit cards fraud detection](https://www.researchgate.net/publication/352275169_Incremental_learning_strategies_for_credit_cards_fraud_detection), International Journal of Data Science and Analytics
[1]: https://www.researchgate.net/project/Fraud-detection-5
[2]: https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/
[3]: https://www.researchgate.net/publication/283349138_Calibrating_Probability_with_Undersampling_for_Unbalanced_Classification
[4]: https://www.researchgate.net/publication/260837261_Learned_lessons_in_credit_card_fraud_detection_from_a_practitioner_perspective
[5]: https://www.researchgate.net/publication/319867396_Credit_Card_Fraud_Detection_A_Realistic_Modeling_and_a_Novel_Learning_Strategy
[6]: http://di.ulb.ac.be/map/adalpozz/pdf/Dalpozzolo2015PhD.pdf
[7]: https://www.researchgate.net/publication/319616537_SCARFF_a_Scalable_Framework_for_Streaming_Credit_Card_Fraud_Detection_with_Spark
[8]: https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection
|
Credit Card Fraud Detection
|
Anonymized credit card transactions labeled as fraudulent or genuine
|
Database: Open Database, Contents: Database Contents
| 3
| 3
| null | 0
| 1
| 0.228283
| 1
| 0.732544
| 0.740207
| 0.554004
| 1
|
8
| 19
| 1
| null | 7
| 420
| 420
| 997
|
Dataset
|
01/12/2016 00:33:31
|
02/06/2018
| 2,684,143
| 761,523
| 4,413
| 7,605
| 1
|
11/06/2019
| 19
|
The Iris dataset was used in R.A. Fisher's classic 1936 paper, [The Use of Multiple Measurements in Taxonomic Problems](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf), and can also be found on the [UCI Machine Learning Repository][1].
It includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.
The columns in this dataset are:
- Id
- SepalLengthCm
- SepalWidthCm
- PetalLengthCm
- PetalWidthCm
- Species
[Sepal width vs. length scatter plot](https://www.kaggle.com/benhamner/d/uciml/iris/sepal-width-vs-length)
[1]: http://archive.ics.uci.edu/ml/
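A minimal pandas sketch for loading and sanity-checking the structure described above (three species, 50 samples each); the file name `Iris.csv` is an assumption.

```python
# Sketch only: load the CSV (file name "Iris.csv" assumed) and confirm the
# structure described above -- three species with 50 samples each.
import pandas as pd

iris = pd.read_csv("Iris.csv")
print(iris.columns.tolist())           # Id, SepalLengthCm, ..., Species
print(iris["Species"].value_counts())  # expect 50 per species
```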
|
Iris Species
|
Classify iris plants into three species in this classic dataset
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.222027
| 0.080587
| 0.765251
| 1
| 0.516966
| 0.416712
| 2
|
24
| 228
| 1
| null | 7
| 482
| 482
| 1,652
|
Dataset
|
10/06/2016 18:31:56
|
02/06/2018
| 3,048,278
| 700,575
| 4,748
| 3,681
| 1
|
11/06/2019
| 228
|
## Context
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
## Content
The dataset consists of several medical predictor variables and one target variable, `Outcome`. Predictor variables include the number of pregnancies the patient has had, their BMI, insulin level, age, and so on.
## Acknowledgements
Smith, J.W., Everhart, J.E., Dickson, W.C., Knowler, W.C., & Johannes, R.S. (1988). [Using the ADAP learning algorithm to forecast the onset of diabetes mellitus][1]. *In Proceedings of the Symposium on Computer Applications and Medical Care* (pp. 261--265). IEEE Computer Society Press.
## Inspiration
Can you build a machine learning model to accurately predict whether or not the patients in the dataset have diabetes?
[1]: http://rexa.info/paper/04587c10a7c92baa01948f71f2513d5928fe8e81
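For the question above, a minimal baseline sketch with scikit-learn; only the `Outcome` target column is taken from the description, while the file name `diabetes.csv` and the model choice are assumptions.

```python
# Minimal baseline sketch (assumptions: file "diabetes.csv", binary 'Outcome' target).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```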
|
Pima Indians Diabetes Database
|
Predict the onset of diabetes based on diagnostic measures
|
CC0: Public Domain
| 1
| 1
|
dentistry, drugs and medications, hospitals and treatment centers
| 3
| 0.252148
| 0.086704
| 0.704004
| 0.484024
| 0.38172
| 0.323329
| 3
|
19,307
| 434,238
| 1,571,785
| 1,571,785
| null | 2,654,038
| 2,698,094
| 446,914
|
Dataset
|
12/04/2019 05:57:54
|
12/04/2019
| 3,613,758
| 655,791
| 9,348
| 2,234
| 1
|
01/17/2020
| 434,238
|
### Other Platform's Datasets (Click on the logos to view)
>
[![alt text][1]][2] [![alt text][3]][4] [![alt text][5]][6] [![alt text][7]][8]
[1]: https://i.imgur.com/As0PMcL.jpg =75x20
[2]: https://www.kaggle.com/shivamb/netflix-shows
[3]: https://i.imgur.com/r5t3MpQ.jpg =75x20
[4]: https://www.kaggle.com/shivamb/amazon-prime-movies-and-tv-shows
[5]: https://i.imgur.com/4a4ZMuy.png =75x30
[6]: https://www.kaggle.com/shivamb/disney-movies-and-tv-shows
[7]: https://i.imgur.com/nCL8Skc.png?1 =75x32
[8]: https://www.kaggle.com/shivamb/hulu-movies-and-tv-shows
- [Amazon Prime Video Movies and TV Shows](https://www.kaggle.com/shivamb/amazon-prime-movies-and-tv-shows)
- [Disney+ Movies and TV Shows](https://www.kaggle.com/shivamb/disney-movies-and-tv-shows)
- [Netflix Movies and TV Shows](https://www.kaggle.com/shivamb/netflix-shows)
- [Hulu Movies and TV Shows](https://www.kaggle.com/shivamb/hulu-movies-and-tv-shows)
### Netflix Movies and TV Shows
> **About this Dataset:** *[Netflix](https://en.wikipedia.org/wiki/Netflix) is one of the most popular media and video streaming platforms. They have over 8000 movies and TV shows available on their platform, and as of mid-2021 they have over 200M subscribers globally. This tabular dataset consists of listings of all the movies and TV shows available on Netflix, along with details such as cast, directors, ratings, release year, duration, etc.*
Featured Notebooks: [Click Here to View Featured Notebooks](https://www.kaggle.com/shivamb/netflix-shows/discussion/279376)
Milestone: Oct 18th, 2021: [Most Upvoted Dataset on Kaggle by an Individual Contributor](https://www.kaggle.com/shivamb/netflix-shows/discussion/279377)
### Interesting Task Ideas
> 1. Understanding what content is available in different countries
> 2. Identifying similar content by matching text-based features
> 3. Network analysis of Actors / Directors and find interesting insights
> 4. Does Netflix have more focus on TV shows than movies in recent years?
[Check my Other Datasets](https://www.kaggle.com/shivamb/datasets)
|
Netflix Movies and TV Shows
|
Listings of movies and tv shows on Netflix - Regularly Updated
|
CC0: Public Domain
| 5
| 5
| null | 0
| 0.298923
| 0.170705
| 0.659001
| 0.293754
| 0.355596
| 0.304241
| 4
|
5,818
| 1,442
| 519,516
| 519,516
| null | 8,172
| 8,172
| 4,272
|
Dataset
|
06/21/2017 21:36:28
|
02/06/2018
| 1,439,580
| 337,021
| 3,784
| 6,476
| 1
|
11/06/2019
| 1,442
|
### Context
After watching [Somm](http://www.imdb.com/title/tt2204371/) (a documentary on master sommeliers) I wondered how I could create a predictive model to identify wines through blind tasting like a master sommelier would. The first step in this journey was gathering some data to train a model. I plan to use deep learning to predict the wine variety using words in the description/review. The model still won't be able to taste the wine, but theoretically it could identify the wine based on a description that a sommelier could give. If anyone has any ideas on how to accomplish this, please post them!
### Content
This dataset contains three files:
- **winemag-data-130k-v2.csv** contains 10 columns and 130k rows of wine reviews.
- **winemag-data_first150k.csv** contains 10 columns and 150k rows of wine reviews.
- **winemag-data-130k-v2.json** contains 6919 nodes of wine reviews.
Click on the data tab to see individual file descriptions, column-level metadata and summary statistics.
### Acknowledgements
The data was scraped from [WineEnthusiast](http://www.winemag.com/?s=&drink_type=wine) during the week of June 15th, 2017. The code for the scraper can be found [here](https://github.com/zackthoutt/wine-deep-learning) if you have any more specific questions about data collection that I didn't address.
**UPDATE 11/24/2017**
After feedback from users of the dataset I scraped the reviews again on November 22nd, 2017. This time around I collected the title of each review (which you can parse the year out of), the taster's name, and the taster's Twitter handle. This should also fix the duplicate entry issue.
### Inspiration
I think that this dataset offers some great opportunities for sentiment analysis and other text related predictive models. My overall goal is to create a model that can identify the variety, winery, and location of a wine based on a description. If anyone has any ideas, breakthroughs, or other interesting insights/models please post them.
|
Wine Reviews
|
130k wine reviews with variety, location, winery, price, and description
|
CC BY-NC-SA 4.0
| 4
| 4
| null | 0
| 0.119079
| 0.0691
| 0.338671
| 0.851545
| 0.344599
| 0.296096
| 5
|
15,411
| 17,810
| 1,314,380
| 1,314,380
| null | 23,812
| 23,851
| 25,540
|
Dataset
|
03/22/2018 05:42:41
|
03/22/2018
| 2,863,109
| 547,127
| 7,159
| 3,303
| 1
|
11/06/2019
| 17,810
|
### Context
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5

Figure S6. Illustrative Examples of Chest X-Rays in Patients with Pneumonia, Related to Figure 6
The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse ‘‘interstitial’’ pattern in both lungs.
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5
### Content
The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients aged one to five years from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care.
For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert.
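Given the train/test/val folder layout with Pneumonia/Normal subfolders described above, a hedged sketch of how such a directory tree could be loaded with torchvision; the root path `chest_xray` is an assumption.

```python
# Sketch: load the folder structure described above (train/test/val, each with
# normal and pneumonia subfolders) using torchvision's ImageFolder.
# The root path "chest_xray" is an assumption.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("chest_xray/train", transform=tfm)
val_ds = datasets.ImageFolder("chest_xray/val", transform=tfm)

train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
print(train_ds.classes)  # e.g. ['NORMAL', 'PNEUMONIA']
```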
### Acknowledgements
Data: https://data.mendeley.com/datasets/rscbjbr9sj/2
License: [CC BY 4.0][1]
Citation: http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5
![enter image description here][2]
### Inspiration
Automated methods to detect and classify human diseases from medical images.
[1]: https://creativecommons.org/licenses/by/4.0/
[2]: https://i.imgur.com/8AUJkin.png
|
Chest X-Ray Images (Pneumonia)
|
5,863 images, 2 categories
|
Other (specified in description)
| 2
| 2
| null | 0
| 0.236831
| 0.130732
| 0.549805
| 0.43432
| 0.337922
| 0.291118
| 6
|
5,111
| 284
| 462,330
| 462,330
| null | 618
| 618
| 1,788
|
Dataset
|
10/26/2016 08:17:30
|
02/06/2018
| 2,563,080
| 726,341
| 6,549
| 1,848
| 1
|
11/06/2019
| 284
|
This dataset contains a list of video games with sales greater than 100,000 copies. It was generated by a scrape of [vgchartz.com][1].
Fields include
* Rank - Ranking of overall sales
* Name - The game's name
* Platform - Platform of the game's release (e.g. PC, PS4, etc.)
* Year - Year of the game's release
* Genre - Genre of the game
* Publisher - Publisher of the game
* NA_Sales - Sales in North America (in millions)
* EU_Sales - Sales in Europe (in millions)
* JP_Sales - Sales in Japan (in millions)
* Other_Sales - Sales in the rest of the world (in millions)
* Global_Sales - Total worldwide sales.
The script to scrape the data is available at https://github.com/GregorUT/vgchartzScrape.
It is based on BeautifulSoup using Python.
There are 16,598 records. 2 records were dropped due to incomplete information.
[1]: http://www.vgchartz.com/
|
Video Game Sales
|
Analyze sales data from more than 16,500 games.
|
Unknown
| 2
| 2
| null | 0
| 0.212013
| 0.119592
| 0.729896
| 0.242998
| 0.326125
| 0.282261
| 7
|
7,521
| 180
| 711,301
| null | 7
| 408
| 408
| 1,547
|
Dataset
|
09/19/2016 20:27:05
|
02/06/2018
| 2,398,741
| 484,259
| 3,892
| 3,501
| 1
|
11/06/2019
| 180
|
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.
The linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
The mean, standard error and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
All feature values are recorded with four significant digits.
Missing attribute values: none
Class distribution: 357 benign, 212 malignant
|
Breast Cancer Wisconsin (Diagnostic) Data Set
|
Predict whether the cancer is benign or malignant
|
CC BY-NC-SA 4.0
| 2
| 2
| null | 0
| 0.198419
| 0.071072
| 0.486629
| 0.460355
| 0.304119
| 0.265528
| 8
|
14,353
| 4,549
| 1,236,717
| 1,236,717
| null | 466,349
| 482,208
| 10,301
|
Dataset
|
11/13/2017 18:30:07
|
02/06/2018
| 2,095,869
| 285,151
| 5,778
| 4,819
| 1
|
11/06/2019
| 4,549
|
UPDATE: Source code used for collecting this data [released here](https://github.com/DataSnaek/Trending-YouTube-Scraper)
### Context
YouTube (the world-famous video sharing website) maintains a list of the [top trending videos](https://www.youtube.com/feed/trending) on the platform. [According to Variety magazine](http://variety.com/2017/digital/news/youtube-2017-top-trending-videos-music-videos-1202631416/), “To determine the year’s top-trending videos, YouTube uses a combination of factors including measuring users interactions (number of views, shares, comments and likes). Note that they’re not the most-viewed videos overall for the calendar year”. Top performers on the YouTube trending list are music videos (such as the famously viral “Gangnam Style”), celebrity and/or reality TV performances, and the random dude-with-a-camera viral videos that YouTube is well-known for.
This dataset is a daily record of the top trending YouTube videos.
Note that this dataset is a structurally improved version of [this dataset](https://www.kaggle.com/datasnaek/youtube).
### Content
This dataset includes several months (and counting) of data on daily trending YouTube videos. Data is included for the US, GB, DE, CA, and FR regions (USA, Great Britain, Germany, Canada, and France, respectively), with up to 200 listed trending videos per day.
EDIT: Now includes data from RU, MX, KR, JP and IN regions (Russia, Mexico, South Korea, Japan and India respectively) over the same time period.
Each region’s data is in a separate file. Data includes the video title, channel title, publish time, tags, views, likes and dislikes, description, and comment count.
The data also includes a `category_id` field, which varies between regions. To retrieve the categories for a specific video, find it in the associated `JSON`. One such file is included for each of the five regions in the dataset.
For more information on specific columns in the dataset refer to the [column metadata](https://www.kaggle.com/datasnaek/youtube-new/data).
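To illustrate the `category_id` lookup described above, a small sketch that builds an id-to-name map from one of the per-region JSON files. The file names (`US_category_id.json`, `USvideos.csv`) and the items/snippet layout follow the YouTube API category format and are assumptions here, not guaranteed by the dataset description.

```python
# Sketch: map category_id values to readable names using a region's category
# JSON file. File names and the items/snippet layout are assumptions.
import json
import pandas as pd

with open("US_category_id.json") as f:
    categories = {
        item["id"]: item["snippet"]["title"]
        for item in json.load(f)["items"]
    }

videos = pd.read_csv("USvideos.csv")
# JSON ids are strings, while the CSV category_id is numeric.
videos["category_name"] = videos["category_id"].astype(str).map(categories)
print(videos[["title", "category_name"]].head())
```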
### Acknowledgements
This dataset was collected using the YouTube API.
### Inspiration
Possible uses for this dataset could include:
* Sentiment analysis in a variety of forms
* Categorising YouTube videos based on their comments and statistics.
* Training ML algorithms like RNNs to generate their own YouTube comments.
* Analysing what factors affect how popular a YouTube video will be.
* Statistical analysis over time.
For further inspiration, see the kernels on this dataset!
|
Trending YouTube Video Statistics
|
Daily statistics for trending YouTube videos
|
CC0: Public Domain
| 115
| 115
| null | 0
| 0.173366
| 0.105513
| 0.286547
| 0.633662
| 0.299772
| 0.262189
| 9
|
22,700
| 661,950
| 1,772,071
| 1,772,071
| null | 2,314,697
| 2,356,116
| 676,378
|
Dataset
|
05/18/2020 22:50:26
|
05/18/2020
| 960,872
| 80,622
| 54,761
| 106
| 1
|
05/24/2021
| 661,950
|
### Context
This dataset comes from this [spreadsheet](https://tinyurl.com/acnh-sheet), a comprehensive Item Catalog for Animal Crossing New Horizons (ACNH). As described by [Wikipedia](https://en.wikipedia.org/wiki/Animal_Crossing:_New_Horizons),
> ACNH is a life simulation game released by Nintendo for Nintendo Switch on March 20, 2020. It is the fifth main series title in the Animal Crossing series and, with 5 million digital copies sold, has broken the record for Switch title with most digital units sold in a single month. In New Horizons, the player assumes the role of a customizable character who moves to a deserted island. Taking place in real-time, the player can explore the island in a nonlinear fashion, gathering and crafting items, catching insects and fish, and developing the island into a community of anthropomorphic animals.
### Content
There are 30 csvs each listing various items, villagers, clothing, and other collectibles from the game. The data was collected by a dedicated group of AC fans who continue to collaborate and build this [spreadsheet](https://tinyurl.com/acnh-sheet) for public use. The database contains the original data and full list of contributors and raw data. At the time of writing, the only difference between the spreadsheet and this version is that the Kaggle version omits all columns with images of the items, but is otherwise identical.
### Acknowledgements
Thanks to every contributor listed on the [spreadsheet!](https://tinyurl.com/acnh-sheet) Please attribute this spreadsheet and group for any use of the data. They also have a Discord server linked in the spreadsheet in case you want to contact them.
|
Animal Crossing New Horizons Catalog
|
A comprehensive inventory of ACNH items, villagers, clothing, fish/bugs etc
|
CC0: Public Domain
| 3
| 3
| null | 0
| 0.079482
| 1
| 0.081017
| 0.013938
| 0.293609
| 0.257436
| 10
|
616
| 2,709
| 9,028
| 9,028
| null | 38,454
| 40,228
| 7,022
|
Dataset
|
09/27/2017 16:56:09
|
02/06/2018
| 578,396
| 193,627
| 1,679
| 6,845
| 1
|
08/19/2020
| 2,709
|
### Context
Melbourne real estate is BOOMING. Can you find the insight or predict the next big trend to become a real estate mogul... or even harder, to snap up a reasonably priced 2-bedroom unit?
### Content
This is a snapshot of a [dataset created by Tony Pino][1].
It was scraped from publicly available results posted every week from Domain.com.au. He cleaned it well, and now it's up to you to make data analysis magic. The dataset includes Address, Type of Real estate, Suburb, Method of Selling, Rooms, Price, Real Estate Agent, Date of Sale and distance from C.B.D.
### Notes on Specific Variables
Rooms: Number of rooms
Price: Price in dollars
Method: S - property sold; SP - property sold prior; PI - property passed in; PN - sold prior not disclosed; SN - sold not disclosed; NB - no bid; VB - vendor bid; W - withdrawn prior to auction; SA - sold after auction; SS - sold after auction price not disclosed. N/A - price or highest bid not available.
Type: br - bedroom(s); h - house,cottage,villa, semi,terrace; u - unit, duplex; t - townhouse; dev site - development site; o res - other residential.
SellerG: Real Estate Agent
Date: Date sold
Distance: Distance from CBD
Regionname: General Region (West, North West, North, North east ...etc)
Propertycount: Number of properties that exist in the suburb.
Bedroom2 : Scraped # of Bedrooms (from different source)
Bathroom: Number of Bathrooms
Car: Number of carspots
Landsize: Land Size
BuildingArea: Building Size
CouncilArea: Governing council for the area
### Acknowledgements
This is intended as a static (unchanging) snapshot of https://www.kaggle.com/anthonypino/melbourne-housing-market. It was created in September 2017. Additionally, homes with no Price have been removed.
[1]: https://www.kaggle.com/anthonypino/melbourne-housing-market
|
Melbourne Housing Snapshot
|
Snapshot of Tony Pino's Melbourne Housing Dataset
|
CC BY-NC-SA 4.0
| 5
| 5
| null | 0
| 0.047844
| 0.030661
| 0.194575
| 0.900066
| 0.293286
| 0.257186
| 11
|
36,092
| 551,982
| 2,931,338
| null | 3,737
| 3,756,201
| 3,810,704
| 565,591
|
Dataset
|
03/12/2020 20:05:08
|
03/12/2020
| 4,757,694
| 194,642
| 11,219
| 1,765
| 1
|
03/16/2020
| 551,982
|
### Dataset Description
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 1,000,000 scholarly articles, including over 400,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease. There is a growing urgency for these approaches because of the rapid acceleration in new coronavirus literature, making it difficult for the medical research community to keep up.
### Call to Action
We are issuing a call to action to the world's artificial intelligence experts to develop text and data mining tools that can help the medical community develop answers to high priority scientific questions. The CORD-19 dataset represents the most extensive machine-readable coronavirus literature collection available for data mining to date. This allows the worldwide AI research community the opportunity to apply text and data mining approaches to find answers to questions within, and connect insights across, this content in support of the ongoing COVID-19 response efforts worldwide. There is a growing urgency for these approaches because of the rapid increase in coronavirus literature, making it difficult for the medical community to keep up.
A list of our initial key questions can be found under the **[Tasks](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks)** section of this dataset. These key scientific questions are drawn from the NASEM’s SCIED (National Academies of Sciences, Engineering, and Medicine’s Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats) [research topics](https://www.nationalacademies.org/event/03-11-2020/standing-committee-on-emerging-infectious-diseases-and-21st-century-health-threats-virtual-meeting-1) and the World Health Organization’s [R&D Blueprint](https://www.who.int/blueprint/priority-diseases/key-action/Global_Research_Forum_FINAL_VERSION_for_web_14_feb_2020.pdf?ua=1) for COVID-19.
Many of these questions are suitable for text mining, and we encourage researchers to develop text mining tools to provide insights on these questions.
We are maintaining a summary of the [community's contributions](https://www.kaggle.com/covid-19-contributions). For guidance on how to make your contributions useful, we're maintaining a [forum thread](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/138484) with the feedback we're getting from the medical and health policy communities.
### Prizes
Kaggle is sponsoring a *$1,000 per task* award to the winner whose submission is identified as best meeting the evaluation criteria. The winner may elect to receive this award as a charitable donation to COVID-19 relief/research efforts or as a monetary payment. More details on the prizes and timeline can be found on the [discussion post](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/135826).
### Accessing the Dataset
We have made this dataset available on Kaggle. Watch out for [periodic updates](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/137474).
The dataset is also hosted on [AI2's Semantic Scholar](https://pages.semanticscholar.org/coronavirus-research). And you can search the dataset using AI2's new [COVID-19 explorer](https://cord-19.apps.allenai.org/).
The licenses for each dataset can be found in the `all_sources_metadata` csv file.
### Acknowledgements

This dataset was created by the Allen Institute for AI in partnership with the Chan Zuckerberg Initiative, Georgetown University’s Center for Security and Emerging Technology, Microsoft Research, IBM, and the National Library of Medicine - National Institutes of Health, in coordination with The White House Office of Science and Technology Policy.
|
COVID-19 Open Research Dataset Challenge (CORD-19)
|
An AI challenge with AI2, CZI, MSR, Georgetown, NIH & The White House
|
Other (specified in description)
| 111
| 104
| null | 0
| 0.393548
| 0.204872
| 0.195595
| 0.232084
| 0.256525
| 0.22835
| 12
|
2,063
| 494,724
| 71,388
| 71,388
| null | 2,364,896
| 2,406,681
| 507,816
|
Dataset
|
01/30/2020 14:18:33
|
01/30/2020
| 2,553,840
| 468,447
| 6,282
| 1,742
| 1
|
02/02/2020
| 494,724
|
### Context
From [World Health Organization](https://www.who.int/emergencies/diseases/novel-coronavirus-2019) - On 31 December 2019, WHO was alerted to several cases of pneumonia in Wuhan City, Hubei Province of China. The virus did not match any other known virus. This raised concern because when a virus is new, we do not know how it affects people.
So daily level information on the affected people can give some interesting insights when it is made available to the broader data science community.
[Johns Hopkins University has made an excellent dashboard](https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6) using the affected cases data. Data is extracted from the google sheets associated and made available here.
Edited:
Now data is available as csv files in the [Johns Hopkins Github repository](https://github.com/CSSEGISandData/COVID-19). Please refer to the github repository for the [Terms of Use](https://github.com/CSSEGISandData/COVID-19/blob/master/README.md) details. Uploading it here for using it in Kaggle kernels and getting insights from the broader DS community.
### Content
2019 Novel Coronavirus (2019-nCoV) is a virus (more specifically, a coronavirus) identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China. Early on, many of the patients in the outbreak in Wuhan, China reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly have not had exposure to animal markets, indicating person-to-person spread is occurring. At this time, it’s unclear how easily or sustainably this virus is spreading between people - [CDC](https://www.cdc.gov/coronavirus/2019-ncov/about/index.html)
This dataset has daily level information on the number of affected cases, deaths and recoveries from the 2019 novel coronavirus. Please note that this is time series data, so the number of cases on any given day is the cumulative number.
The data is available from 22 Jan, 2020.
### Column Description
Main file in this dataset is `covid_19_data.csv` and the detailed descriptions are below.
`covid_19_data.csv`
* Sno - Serial number
* ObservationDate - Date of the observation in MM/DD/YYYY
* Province/State - Province or state of the observation (Could be empty when missing)
* Country/Region - Country of observation
* Last Update - Time in UTC at which the row is updated for the given province or country. (Not standardised and so please clean before using it)
* Confirmed - Cumulative number of confirmed cases till that date
* Deaths - Cumulative number of deaths till that date
* Recovered - Cumulative number of recovered cases till that date
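Because the counts above are cumulative, daily new cases have to be derived by differencing. A rough sketch, assuming only the `covid_19_data.csv` columns listed above:

```python
# Sketch: derive daily new confirmed cases from the cumulative counts in
# covid_19_data.csv, using the columns listed above.
import pandas as pd

df = pd.read_csv("covid_19_data.csv", parse_dates=["ObservationDate"])

# Aggregate provinces to country level, then difference the cumulative totals.
country = (df.groupby(["Country/Region", "ObservationDate"])["Confirmed"]
             .sum()
             .sort_index())
daily_new = country.groupby(level="Country/Region").diff().fillna(country)

print(daily_new.tail())
```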
`2019_ncov_data.csv`
This is an older file and is no longer being updated. Please use the `covid_19_data.csv` file instead.
**Added two new files with individual level information**
`COVID_open_line_list_data.csv`
This file is obtained from [this link](https://docs.google.com/spreadsheets/d/1itaohdPiAeniCXNlntNztZ_oRvjh0HsGuJXUJWET008/edit#gid=0)
`COVID19_line_list_data.csv`
This file is obtained from [this link](https://docs.google.com/spreadsheets/d/e/2PACX-1vQU0SIALScXx8VXDX7yKNKWWPKE1YjFlWc6VTEVSN45CklWWf-uWmprQIyLtoPDA18tX9cFDr-aQ9S6/pubhtml)
**Country level datasets**
If you are interested in knowing country level data, please refer to the following Kaggle datasets:
**India** - https://www.kaggle.com/sudalairajkumar/covid19-in-india
**South Korea** - https://www.kaggle.com/kimjihoo/coronavirusdataset
**Italy** - https://www.kaggle.com/sudalairajkumar/covid19-in-italy
**Brazil** - https://www.kaggle.com/unanimad/corona-virus-brazil
**USA** - https://www.kaggle.com/sudalairajkumar/covid19-in-usa
**Switzerland** - https://www.kaggle.com/daenuprobst/covid19-cases-switzerland
**Indonesia** - https://www.kaggle.com/ardisragen/indonesia-coronavirus-cases
### Acknowledgements
* [Johns Hopkins University](https://github.com/CSSEGISandData/COVID-19) for making the data available for educational and academic research purposes
* MoBS lab - https://www.mobs-lab.org/2019ncov.html
* World Health Organization (WHO): https://www.who.int/
* DXY.cn. Pneumonia. 2020. http://3g.dxy.cn/newh5/view/pneumonia.
* BNO News: https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/
* National Health Commission of the People’s Republic of China (NHC):
http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml
* China CDC (CCDC): http://weekly.chinacdc.cn/news/TrackingtheEpidemic.htm
* Hong Kong Department of Health: https://www.chp.gov.hk/en/features/102465.html
* Macau Government: https://www.ssm.gov.mo/portal/
* Taiwan CDC: https://sites.google.com/cdc.gov.tw/2019ncov/taiwan?authuser=0
* US CDC: https://www.cdc.gov/coronavirus/2019-ncov/index.html
* Government of Canada: https://www.canada.ca/en/public-health/services/diseases/coronavirus.html
* Australia Government Department of Health: https://www.health.gov.au/news/coronavirus-update-at-a-glance
* European Centre for Disease Prevention and Control (ECDC): https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases
* Ministry of Health Singapore (MOH): https://www.moh.gov.sg/covid-19
* Italy Ministry of Health: http://www.salute.gov.it/nuovocoronavirus
Picture courtesy : [Johns Hopkins University dashboard](https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6)
### Inspiration
Some insights could be
1. Changes in number of affected cases over time
2. Change in cases over time at country level
3. Latest number of affected cases
|
Novel Corona Virus 2019 Dataset
|
Day level information on covid-19 affected cases
|
Data files © Original Authors
| 151
| 151
| null | 0
| 0.211249
| 0.114717
| 0.47074
| 0.22906
| 0.256441
| 0.228283
| 13
|
3,049
| 138
| 1,132,983
| null | 1,030
| 4,508
| 4,508
| 1,471
|
Dataset
|
08/30/2016 03:36:42
|
02/06/2018
| 2,086,741
| 470,707
| 4,140
| 2,289
| 1
|
11/06/2019
| 138
|
### Background
What can we say about the success of a movie before it is released? Are there certain companies (Pixar?) that have found a consistent formula? Given that major films costing over $100 million to produce can still flop, this question is more important than ever to the industry. Film aficionados might have different interests. Can we predict which films will be highly rated, whether or not they are a commercial success?
This is a great place to start digging in to those questions, with data on the plot, cast, crew, budget, and revenues of several thousand films.
### Data Source Transfer Summary
We (Kaggle) have removed the original version of this dataset per a [DMCA](https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act) takedown request from IMDB. In order to minimize the impact, we're replacing it with a similar set of films and data fields from [The Movie Database (TMDb)](themoviedb.org) in accordance with [their terms of use](https://www.themoviedb.org/documentation/api/terms-of-use). The bad news is that kernels built on the old dataset will most likely no longer work.
The good news is that:
- You can port your existing kernels over with a bit of editing. [This kernel](https://www.kaggle.com/sohier/getting-imdb-kernels-working-with-tmdb-data/) offers functions and examples for doing so. You can also find [a general introduction to the new format here](https://www.kaggle.com/sohier/tmdb-format-introduction).
- The new dataset contains full credits for both the cast and the crew, rather than just the first three actors.
- Actors and actresses are now listed in the order they appear in the credits. It's unclear what ordering the original dataset used; for the movies I spot checked it didn't line up with either the credits order or IMDB's stars order.
- The revenues appear to be more current. For example, IMDB's figures for Avatar seem to be from 2010 and understate the film's global revenues by over $2 billion.
- Some of the movies that we weren't able to port over (a couple of hundred) were just bad entries. For example, [this IMDB entry](http://www.imdb.com/title/tt5289954/?ref_=fn_t...) has basically no accurate information at all. It lists Star Wars Episode VII as a documentary.
### Data Source Transfer Details
- Several of the new columns contain json. You can save a bit of time by porting the load data functions [from this kernel]().
- Even simple fields like runtime may not be consistent across versions. For example, the previous dataset shows the duration for Avatar's extended cut while TMDB shows the time for the original version.
- There's now a separate file containing the full credits for both the cast and crew.
- All fields are filled out by users so don't expect them to agree on keywords, genres, ratings, or the like.
- Your existing kernels will continue to render normally until they are re-run.
- If you are curious about how this dataset was prepared, the code to access TMDb's API is posted [here](https://gist.github.com/SohierDane/4a84cb96d220fc4791f52562be37968b).
New columns:
- homepage
- id
- original_title
- overview
- popularity
- production_companies
- production_countries
- release_date
- spoken_languages
- status
- tagline
- vote_average
Lost columns:
- actor_1_facebook_likes
- actor_2_facebook_likes
- actor_3_facebook_likes
- aspect_ratio
- cast_total_facebook_likes
- color
- content_rating
- director_facebook_likes
- facenumber_in_poster
- movie_facebook_likes
- movie_imdb_link
- num_critic_for_reviews
- num_user_for_reviews
### Open Questions About the Data
There are some things we haven't had a chance to confirm about the new dataset. If you have any insights, please let us know in the forums!
- Are the budgets and revenues all in US dollars? Do they consistently show the global revenues?
- This dataset hasn't yet gone through a data quality analysis. Can you find any obvious corrections? For example, in the IMDb version it was necessary to treat values of zero in the budget field as missing. Similar findings would be very helpful to your fellow Kagglers! (It's probably a good idea to keep treating zeros as missing, with the caveat that missing budgets are much more likely to have come from small budget films in the first place.)
### Inspiration
- Can you categorize the films by type, such as animated or not? We don't have explicit labels for this, but it should be possible to build them from the crew's job titles.
- How sharp is the divide between major film studios and the independents? Do those two groups fall naturally out of a clustering analysis or is something more complicated going on?
### Acknowledgements
This dataset was generated from [The Movie Database](themoviedb.org) API. This product uses the TMDb API but is not endorsed or certified by TMDb.
Their API also provides access to data on many additional movies, actors and actresses, crew members, and TV shows. You can [try it for yourself here](https://www.themoviedb.org/documentation/api).

|
TMDB 5000 Movie Dataset
|
Metadata on ~5,000 movies from TMDb
|
Other (specified in description)
| 2
| 1
| null | 0
| 0.172611
| 0.075601
| 0.473011
| 0.300986
| 0.255552
| 0.227576
| 14
|
10,236
| 11,167
| 907,764
| 907,764
| null | 15,520
| 15,520
| 18,557
|
Dataset
|
01/28/2018 08:44:24
|
02/06/2018
| 1,128,892
| 236,099
| 2,404
| 4,747
| 1
|
09/09/2020
| 11,167
|
### Context
Bob has started his own mobile company. He wants to compete with big companies like Apple and Samsung.
He does not know how to estimate the price of the mobiles his company creates. In this competitive mobile phone market you cannot simply assume things. To solve this problem he collects sales data of mobile phones from various companies.
Bob wants to find some relation between the features of a mobile phone (e.g. RAM, internal memory, etc.) and its selling price. But he is not so good at machine learning, so he needs your help to solve this problem.
In this problem you do not have to predict the actual price but a price range indicating how high the price is.
|
Mobile Price Classification
|
Classify Mobile Price Range
|
Unknown
| 1
| 1
| null | 0
| 0.09338
| 0.0439
| 0.237255
| 0.624195
| 0.249682
| 0.222889
| 15
|
19,332
| 13,996
| 1,574,575
| 1,574,575
| null | 18,858
| 18,858
| 21,551
|
Dataset
|
02/23/2018 18:20:00
|
02/23/2018
| 2,529,709
| 426,389
| 3,217
| 2,056
| 1
|
11/06/2019
| 13,996
|
### Context
"Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs." [IBM Sample Data Sets]
### Content
Each row represents a customer, and each column contains a customer attribute, as described in the column Metadata.
**The data set includes information about:**
+ Customers who left within the last month – the column is called Churn
+ Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
+ Customer account information – how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
+ Demographic info about customers – gender, age range, and if they have partners and dependents
### Inspiration
To explore this type of model and learn more about the subject.
**New version from IBM:**
https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113
|
Telco Customer Churn
|
Focused customer retention programs
|
Data files © Original Authors
| 1
| 1
| null | 0
| 0.209253
| 0.058746
| 0.428476
| 0.270348
| 0.241706
| 0.216486
| 16
|
64,098
| 1,041,311
| 761,104
| 761,104
| null | 7,746,251
| 7,846,685
| 1,058,264
|
Dataset
|
12/16/2020 15:20:03
|
12/16/2020
| 910,143
| 183,855
| 2,567
| 5,005
| 1
|
08/25/2021
| 1,041,311
|
## Content
This dataset was generated by respondents to a distributed survey via Amazon Mechanical Turk between 03.12.2016 and 05.12.2016. Thirty eligible Fitbit users consented to the submission of personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. Individual reports can be parsed by export session ID (column A) or timestamp (column B). Variation between output represents use of different types of Fitbit trackers and individual tracking behaviors / preferences.
#

#
### Starter Kernel(s)
- Julen Aranguren: https://www.kaggle.com/julenaranguren/bellabeat-case-study
- Anastasiia Chebotina: https://www.kaggle.com/chebotinaa/bellabeat-case-study-with-r
#
### Inspiration
- Human temporal routine behavioral analysis and pattern recognition
### Acknowledgements
Furberg, Robert; Brinton, Julia; Keating, Michael ; Ortiz, Alexa
https://zenodo.org/record/53894#.YMoUpnVKiP9
### Study
- [Bellabeat Case Study](https://medium.com/@somchukwumankwocha/bellabeat-case-study-c18835475563)
- [Machine Learning for Fatigue Detection using Fitbit Fitness Trackers](https://sintef.brage.unit.no/sintef-xmlui/bitstream/handle/11250/3055538/Machine_learning_for_fatigue_detection%2B%25288%2529.pdf?sequence=1&isAllowed=y)
- [How I analyzed the data from my FitBit to improve my overall health](https://www.freecodecamp.org/news/how-i-analyzed-the-data-from-my-fitbit-to-improve-my-overall-health-a2e36426d8f9/)
- [Fitbit Fitness Tracker Data Analysis](https://fromdatatostory.com/portfolio/fitbitanalysis/)
- [Evaluating my fitness by analyzing my Fitbit data archive](https://towardsdatascience.com/evaluating-my-fitness-by-analyzing-my-fitbit-data-archive-23a123baf349)
|
FitBit Fitness Tracker Data
|
Pattern recognition with tracker data: : Improve Your Overall Health
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.075285
| 0.046876
| 0.184755
| 0.65812
| 0.241259
| 0.216126
| 17
|
27,138
| 74,977
| 2,094,163
| 2,094,163
| null | 169,835
| 180,443
| 84,238
|
Dataset
|
11/09/2018 18:25:25
|
11/09/2018
| 2,091,500
| 405,140
| 4,804
| 1,505
| 1
|
11/06/2019
| 74,977
|
### Context
Marks secured by the students
### Content
This data set consists of the marks secured by the students in various subjects.
### Acknowledgements
http://roycekimmons.com/tools/generated_data/exams
### Inspiration
To understand the influence of the parents' background, test preparation, etc. on students' performance.
|
Students Performance in Exams
|
Marks secured by the students in various subjects
|
Unknown
| 1
| 1
| null | 0
| 0.173005
| 0.087727
| 0.407123
| 0.197896
| 0.216438
| 0.195927
| 18
|
5,451
| 2,243
| 484,516
| null | 952
| 9,243
| 9,243
| 6,101
|
Dataset
|
08/28/2017 20:58:16
|
02/06/2018
| 1,517,576
| 256,517
| 3,046
| 2,945
| 1
|
11/06/2019
| 2,243
|
### Context
Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."
Zalando seeks to replace the original MNIST dataset.
### Content
Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. The training and test data sets have 785 columns. The first column consists of the class labels (see above), and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image.
- To locate a pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27. The pixel is located on row i and column j of a 28 x 28 matrix.
- For example, pixel31 indicates the pixel that is in the fourth column from the left and the second row from the top.
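A tiny sketch of the indexing just described (x = i * 28 + j, treating the pixel number directly as x, as in the example above):

```python
# Sketch of the indexing described above: x = i * 28 + j, with i, j in [0, 27].
def pixel_position(x):
    i, j = divmod(x, 28)
    return i, j  # (row, column), zero-based

# pixel31 -> row 1, column 3, i.e. the second row from the top and the
# fourth column from the left, matching the example above.
print(pixel_position(31))  # (1, 3)
```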
<br><br>
**Labels**
Each training and test example is assigned to one of the following labels:
- 0 T-shirt/top
- 1 Trouser
- 2 Pullover
- 3 Dress
- 4 Coat
- 5 Sandal
- 6 Shirt
- 7 Sneaker
- 8 Bag
- 9 Ankle boot
<br><br>
TL;DR
- Each row is a separate image
- Column 1 is the class label.
- Remaining columns are pixel numbers (784 total).
- Each value is the darkness of the pixel (1 to 255)
### Acknowledgements
- Original dataset was downloaded from [https://github.com/zalandoresearch/fashion-mnist][1]
- Dataset was converted to CSV with this script: [https://pjreddie.com/projects/mnist-in-csv/][2]
### License
The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
[1]: https://github.com/zalandoresearch/fashion-mnist
[2]: https://pjreddie.com/projects/mnist-in-csv/
|
Fashion MNIST
|
An MNIST-like dataset of 70,000 28x28 labeled fashion images
|
Other (specified in description)
| 4
| 4
| null | 0
| 0.125531
| 0.055624
| 0.257773
| 0.387245
| 0.206543
| 0.187759
| 19
|
14,182
| 33,180
| 1,223,413
| 1,223,413
| null | 43,520
| 45,794
| 41,554
|
Dataset
|
06/25/2018 11:33:56
|
06/25/2018
| 1,811,531
| 284,703
| 5,685
| 2,005
| null | null | 33,180
| null | null | null | null | null | 0
| null | 0
| 0.149846
| 0.103815
| 0.286097
| 0.263642
| 0.20085
| 0.18303
| 20
|
19,997
| 13,720
| 1,616,098
| 1,616,098
| null | 18,513
| 18,513
| 21,253
|
Dataset
|
02/21/2018 00:15:14
|
02/21/2018
| 1,784,261
| 348,010
| 2,978
| 1,870
| 1
|
11/06/2019
| 13,720
|
## Context
Machine Learning with R by Brett Lantz is a book that provides an introduction to machine learning using R. As far as I can tell, Packt Publishing does not make its datasets available online unless you buy the book and create a user account, which can be a problem if you are checking the book out from the library or borrowing the book from a friend. All of these datasets are in the public domain but simply needed some cleaning up and recoding to match the format in the book.
## Content
**Columns**
- age: age of primary beneficiary
- sex: insurance contractor gender, female, male
- bmi: Body mass index, an objective measure of body weight relative to height (kg / m^2), with values ideally between 18.5 and 24.9
- children: Number of children covered by health insurance / Number of dependents
- smoker: Smoking
- region: the beneficiary's residential area in the US, northeast, southeast, southwest, northwest.
- charges: Individual medical costs billed by health insurance
## Acknowledgements
The dataset is available on GitHub [here](https://github.com/stedy/Machine-Learning-with-R-datasets).
## Inspiration
Can you accurately predict insurance costs?
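Given the columns above and the "Insurance Forecast by using Linear Regression" framing, a minimal sketch; the file name `insurance.csv` is an assumption.

```python
# Minimal linear-regression sketch using the columns described above.
# The file name "insurance.csv" is an assumption.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("insurance.csv")
# One-hot encode the categorical columns (sex, smoker, region).
X = pd.get_dummies(df.drop(columns=["charges"]), drop_first=True)
y = df["charges"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```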
|
Medical Cost Personal Datasets
|
Insurance Forecast by using Linear Regression
|
Database: Open Database, Contents: Database Contents
| 1
| 1
| null | 0
| 0.147591
| 0.054382
| 0.349713
| 0.245891
| 0.199394
| 0.181817
| 21
|
13,417
| 4,458
| 1,132,983
| null | 7
| 8,204
| 8,204
| 10,170
|
Dataset
|
11/12/2017 14:08:43
|
02/06/2018
| 1,650,374
| 318,880
| 3,134
| 2,025
| 1
|
11/06/2019
| 4,458
|
### Context
The two datasets are related to red and white variants of the Portuguese "Vinho Verde" wine. For more details, consult the reference [Cortez et al., 2009]. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
These datasets can be viewed as classification or regression tasks. The classes are ordered and not balanced (e.g. there are many more normal wines than excellent or poor ones).
---
*This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. (If I am mistaken and the public license type disallowed me from doing so, I will take this down if requested.)*
### Content
For more information, read [Cortez et al., 2009].<br>
Input variables (based on physicochemical tests):<br>
1 - fixed acidity <br>
2 - volatile acidity <br>
3 - citric acid <br>
4 - residual sugar <br>
5 - chlorides <br>
6 - free sulfur dioxide <br>
7 - total sulfur dioxide <br>
8 - density <br>
9 - pH <br>
10 - sulphates <br>
11 - alcohol <br>
Output variable (based on sensory data): <br>
12 - quality (score between 0 and 10) <br>
### Tips
An interesting thing to do, aside from regression modelling, is to set an arbitrary cutoff for your dependent variable (wine quality), e.g. 7 or higher getting classified as 'good/1' and the remainder as 'not good/0'.
This allows you to practice with hyper parameter tuning on e.g. decision tree algorithms looking at the ROC curve and the AUC value.
Without doing any kind of feature engineering or overfitting you should be able to get an AUC of .88 (without even using a random forest). A scikit-learn sketch of the same workflow follows the KNIME steps below.
**KNIME** is a great tool (GUI) that can be used for this.<br>
1 - File Reader (for csv) to linear correlation node and to interactive histogram for basic EDA.<br>
2- File Reader to 'Rule Engine Node' to turn the 10 point scale into a dichotomous variable (good wine and the rest); the code to put in the rule engine is something like this:<br>
- **$quality$ > 6.5 => "good"**<br>
- **TRUE => "bad"** <br>
3 - Rule Engine Node output to input of Column Filter node to filter out your original 10-point feature (this prevents leakage)<br>
4 - Column Filter Node output to input of Partitioning Node (your standard train/test split, e.g. 75%/25%; choose 'random' or 'stratified')<br>
5 - Partitioning Node train split output to input of Decision Tree Learner node, and<br>
6 - Partitioning Node test split output to input of Decision Tree Predictor node<br>
7 - Decision Tree Learner node output to input of Decision Tree Predictor node<br>
8 - Decision Tree Predictor output to input of ROC node (here you can evaluate your model based on the AUC value)<br>
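If you prefer code over a GUI, the same workflow can be sketched with scikit-learn. The file name `winequality-red.csv` is an assumption; the steps simply mirror the node chain above.

```python
# A rough Python equivalent of the KNIME workflow above (file name is an assumption).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("winequality-red.csv")

# Rule Engine step: quality > 6.5 => "good" (1), otherwise "bad" (0).
y = (df["quality"] > 6.5).astype(int)

# Column Filter step: drop the original 10-point feature to prevent leakage.
X = df.drop(columns=["quality"])

# Partitioning step: stratified 75%/25% train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Decision Tree Learner / Predictor steps.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# ROC step: evaluate with the AUC value.
print("AUC:", roc_auc_score(y_test, tree.predict_proba(X_test)[:, 1]))
```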
### Inspiration
Use machine learning to determine which physiochemical properties make a wine 'good'!
### Acknowledgements
This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality ; I just shared it to Kaggle for convenience. *(If I am mistaken and the public license type disallows me from doing so, I will take this down at first request. I am not the owner of this dataset.)*
**Please include this citation if you plan to use this database:
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.**
### Relevant publication
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties.
In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
|
Red Wine Quality
|
Simple and clean practice dataset for regression or classification modelling
|
Database: Open Database, Contents: Database Contents
| 2
| 2
| null | 0
| 0.136516
| 0.057231
| 0.320441
| 0.266272
| 0.195115
| 0.178242
| 22
|
10,417
| 3,405
| 927,562
| 927,562
| null | 6,663
| 6,663
| 8,425
|
Dataset
|
10/24/2017 18:53:43
|
02/06/2018
| 2,049,402
| 429,937
| 3,806
| 727
| 1
|
11/06/2019
| 3,405
|
### Context
These files contain metadata for all 45,000 movies listed in the Full MovieLens Dataset. The dataset consists of movies released on or before July 2017. Data points include cast, crew, plot keywords, budget, revenue, posters, release dates, languages, production companies, countries, TMDB vote counts and vote averages.
This dataset also has files containing 26 million ratings from 270,000 users for all 45,000 movies. Ratings are on a scale of 1-5 and have been obtained from the official GroupLens website.
### Content
This dataset consists of the following files:
**movies_metadata.csv:** The main Movies Metadata file. Contains information on 45,000 movies featured in the Full MovieLens dataset. Features include posters, backdrops, budget, revenue, release dates, languages, production countries and companies.
**keywords.csv:** Contains the movie plot keywords for our MovieLens movies. Available in the form of a stringified JSON Object.
**credits.csv:** Consists of Cast and Crew Information for all our movies. Available in the form of a stringified JSON Object.
**links.csv:** The file that contains the TMDB and IMDB IDs of all the movies featured in the Full MovieLens dataset.
**links_small.csv:** Contains the TMDB and IMDB IDs of a small subset of 9,000 movies of the Full Dataset.
**ratings_small.csv:** The subset of 100,000 ratings from 700 users on 9,000 movies.
The Full MovieLens Dataset consisting of 26 million ratings and 750,000 tag applications from 270,000 users on all the 45,000 movies in this dataset can be accessed [here](https://grouplens.org/datasets/movielens/latest/)
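Since `keywords.csv` and `credits.csv` store their contents as stringified JSON objects, a first processing step is usually to turn those strings back into Python objects. The sketch below assumes the column in `keywords.csv` is also named `keywords` and that each cell holds a list of `{id, name}` dicts; adjust to the actual schema as needed.

```python
# A minimal parsing sketch for the stringified JSON columns (column names are assumptions).
import ast
import pandas as pd

keywords = pd.read_csv("keywords.csv")

# Each cell looks like a list of dicts in string form, e.g. "[{'id': 931, 'name': 'jealousy'}, ...]";
# ast.literal_eval turns it back into Python objects.
keywords["keywords"] = keywords["keywords"].apply(ast.literal_eval)

# Flatten to plain keyword names per movie.
keywords["keyword_names"] = keywords["keywords"].apply(lambda lst: [d["name"] for d in lst])
print(keywords[["id", "keyword_names"]].head())
```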
### Acknowledgements
This dataset is an ensemble of data collected from TMDB and GroupLens.
The Movie Details, Credits and Keywords have been collected from the TMDB Open API. This product uses the TMDb API but is not endorsed or certified by TMDb. Their API also provides access to data on many additional movies, actors and actresses, crew members, and TV shows. You can try it for yourself [here](https://www.themoviedb.org/documentation/api).
The Movie Links and Ratings have been obtained from the Official GroupLens website. The files are a part of the dataset available [here](https://grouplens.org/datasets/movielens/latest/)

### Inspiration
This dataset was assembled as part of my second Capstone Project for Springboard's [Data Science Career Track](https://www.springboard.com/workshops/data-science-career-track). I wanted to perform an extensive EDA on Movie Data to narrate the history and the story of Cinema and use this metadata in combination with MovieLens ratings to build various types of Recommender Systems.
Both my notebooks are available as kernels with this dataset: [The Story of Film](https://www.kaggle.com/rounakbanik/the-story-of-film) and [Movie Recommender Systems](https://www.kaggle.com/rounakbanik/movie-recommender-systems)
Some of the things you can do with this dataset:
- Predicting movie revenue and/or movie success based on a certain metric.
- What movies tend to get higher vote counts and vote averages on TMDB?
- Building Content Based and Collaborative Filtering Based Recommendation Engines.
|
The Movies Dataset
|
Metadata on over 45,000 movies. 26 million ratings from over 270,000 users.
|
CC0: Public Domain
| 7
| 7
| null | 0
| 0.169523
| 0.069502
| 0.432041
| 0.095595
| 0.191665
| 0.175352
| 23
|
8,751
| 894
| 495,305
| null | 485
| 813,759
| 836,098
| 2,741
|
Dataset
|
02/28/2017 15:00:38
|
02/06/2018
| 1,919,530
| 365,741
| 4,394
| 1,169
| 1
|
11/06/2019
| 894
|
### Context
The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.
### Content
The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.
### Inspiration
What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?
**What is Dystopia?**
Dystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.
**What are the residuals?**
The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.
**What do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?**
The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country.
The Dystopia Residual metric is actually the Dystopia Happiness Score (1.85) plus the Residual value, the unexplained value for each country, as stated in the previous answer.
If you add all these factors up, you get the happiness score, so it might be unreliable to use them as features to predict Happiness Scores.
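You can verify that decomposition with a quick sanity check. The column names below follow the 2017 report file and are assumptions; they vary slightly between report years.

```python
# Sanity-check sketch: the six factors plus the Dystopia Residual should
# approximately reconstruct the Happiness Score (column names are assumptions).
import pandas as pd

df = pd.read_csv("2017.csv")
factor_cols = [
    "Economy..GDP.per.Capita.", "Family", "Health..Life.Expectancy.",
    "Freedom", "Generosity", "Trust..Government.Corruption.", "Dystopia.Residual",
]
reconstructed = df[factor_cols].sum(axis=1)
print((reconstructed - df["Happiness.Score"]).abs().max())  # should be close to zero
```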
#[Start a new kernel][1]
[1]: https://www.kaggle.com/unsdsn/world-happiness/kernels?modal=true
|
World Happiness Report
|
Happiness scored according to economic production, social support, etc.
|
CC0: Public Domain
| 2
| 3
| null | 0
| 0.15878
| 0.08024
| 0.367531
| 0.153715
| 0.190066
| 0.174009
| 24
|
31,538
| 134,715
| 2,483,565
| 2,483,565
| null | 320,111
| 333,307
| 144,904
|
Dataset
|
03/09/2019 06:32:21
|
03/09/2019
| 1,378,531
| 340,277
| 1,518
| 1,713
| 1
|
07/26/2020
| 134,715
|
IMDB dataset of 50K movie reviews for natural language processing or text analytics.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training and 25,000 for testing. So, predict whether a review is positive or negative using either classification or deep learning algorithms.
For more dataset information, please go through the following link,
http://ai.stanford.edu/~amaas/data/sentiment/
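As a starting point, a simple bag-of-words baseline often does well on this task. The sketch below assumes the CSV is named `IMDB Dataset.csv` with `review` and `sentiment` columns; both names are assumptions.

```python
# Baseline sketch for binary sentiment classification (file/column names are assumptions).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("IMDB Dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["sentiment"], test_size=0.5, random_state=0)

# TF-IDF features over unigrams and bigrams, then a linear classifier.
vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(vec.transform(X_test))))
```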
|
IMDB Dataset of 50K Movie Reviews
|
Large Movie Review Dataset
|
Other (specified in description)
| 1
| 1
| null | 0
| 0.11403
| 0.02772
| 0.341943
| 0.225247
| 0.177235
| 0.163168
| 25
|
6,965
| 63
| 655,525
| 655,525
| null | 589
| 589
| 1,357
|
Dataset
|
07/09/2016 13:40:34
|
02/06/2018
| 1,858,807
| 255,810
| 4,796
| 1,597
| 1
|
11/06/2019
| 63
|
The ultimate Soccer database for data analysis and machine learning
-------------------------------------------------------------------
**What you get:**
- +25,000 matches
- +10,000 players
- 11 European countries with their leading championship
- Seasons 2008 to 2016
- Players and Teams' attributes* sourced from EA Sports' FIFA video game series, including the weekly updates
- Team line up with squad formation (X, Y coordinates)
- Betting odds from up to 10 providers
- Detailed match events (goal types, possession, corner, cross, fouls, cards etc...) for +10,000 matches
*16th Oct 2016: New table containing teams' attributes from FIFA!*
----------
**Original Data Source:**
You can easily find data about soccer matches but they are usually scattered across different websites. A thorough data collection and processing has been done to make your life easier. **I must insist that you do not make any commercial use of the data**. The data was sourced from:
- [http://football-data.mx-api.enetscores.com/][1] : scores, lineup, team formation and events
- [http://www.football-data.co.uk/][2] : betting odds. [Click here to understand the column naming system for betting odds:][3]
- [http://sofifa.com/][4] : players and teams attributes from EA Sports FIFA games. *FIFA series and all FIFA assets property of EA Sports.*
> When you have a look at the database, you will notice foreign keys for
> players and matches are the same as the original data sources. I have
> called those foreign keys "api_id".
----------
**Improving the dataset:**
You will notice that some players are missing from the lineup (NULL values). This is because I have not been able to source their attributes from FIFA. This will be fixed over time as the crawling algorithm is improved.
The dataset will also be expanded to include international games, national cups, Champion's League and Europa League. Please ask me if you're after a specific tournament.
> Please get in touch with me if you want to help improve this dataset.
[CLICK HERE TO ACCESS THE PROJECT GITHUB][5]
*Important note for people interested in using the crawlers:* since I first wrote the crawling scripts (in Python), it appears sofifa.com has changed its design, and with it come new requirements for the scripts. The existing script to crawl players ('Player Spider') will not work until I've updated it.
----------
Exploring the data:
Now that's the fun part, there is a lot you can do with this dataset. I will be adding visuals and insights to this overview page but please have a look at the kernels and give it a try yourself ! Here are some ideas for you:
**The Holy Grail...**
... is obviously to predict the outcome of the game. The bookies use 3 classes (Home Win, Draw, Away Win). They get it right about 53% of the time. This is also what I've achieved so far using my own SVM. Though that may sound high for such an unpredictable sport, bear in mind that the home team wins about 46% of the time, so the base case (constantly predicting Home Win) already has 46% precision.
**Probabilities vs Odds**
When running a multi-class classifier like SVM you could also output a probability estimate and compare it to the betting odds. Have a look at your variance vs odds and see for what games you had very different predictions.
**Explore and visualize features**
With access to players' and teams' attributes, team formations and in-game events, you should be able to produce some interesting insights into [The Beautiful Game][6]. Who knows, Guardiola himself may hire one of you some day!
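To check the home-win baseline mentioned above yourself, you can query the SQLite database directly. The database file name and the `Match` table/column names below are assumptions; inspect the schema first if they differ.

```python
# Sketch of the home-win baseline (database file, table, and column names are assumptions).
import sqlite3
import pandas as pd

conn = sqlite3.connect("database.sqlite")
matches = pd.read_sql("SELECT home_team_goal, away_team_goal FROM Match", conn)

# Share of matches won by the home team; constantly predicting "Home Win" scores this precision.
home_win_rate = (matches["home_team_goal"] > matches["away_team_goal"]).mean()
print(f"Share of matches won by the home team: {home_win_rate:.1%}")
```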
[1]: http://football-data.mx-api.enetscores.com/
[2]: http://www.football-data.co.uk/
[3]: http://www.football-data.co.uk/notes.txt
[4]: http://sofifa.com/
[5]: https://github.com/hugomathien/football-data-collection/tree/master/footballData
[6]: https://en.wikipedia.org/wiki/The_Beautiful_Game
|
European Soccer Database
|
25k+ matches, players & teams attributes for European Professional Football
|
Database: Open Database, Contents: © Original Authors
| 10
| 10
| null | 0
| 0.153757
| 0.087581
| 0.257062
| 0.209993
| 0.177098
| 0.163052
| 26
|
14,349
| 2,321
| 1,236,717
| 1,236,717
| null | 3,919
| 3,919
| 6,243
|
Dataset
|
09/04/2017 03:09:09
|
02/05/2018
| 456,183
| 58,647
| 1,394
| 4,433
| 1
|
09/02/2020
| 2,321
|
**General Info**
This is a set of just over 20,000 games collected from a selection of users on the site Lichess.org, along with notes on how to collect more. I will also upload more games in the future as I collect them. For each game, this set contains the:
- Game ID;
- Rated (T/F);
- Start Time;
- End Time;
- Number of Turns;
- Game Status;
- Winner;
- Time Increment;
- White Player ID;
- White Player Rating;
- Black Player ID;
- Black Player Rating;
- All Moves in Standard Chess Notation;
- Opening Eco (Standardised Code for any given opening, [list here][1]);
- Opening Name;
- Opening Ply (Number of moves in the opening phase)
I collected this data using the [Lichess API][2], which enables collection of any given user's game history. The difficult part was collecting usernames to use; however, the API also enables dumping of all users in a Lichess team. There are several teams on Lichess with over 1,500 players, so this proved an effective way to find users to collect games from.
**Possible Uses**
Lots of information is contained within a single chess game, let alone a full dataset of multiple games. It is primarily a game of patterns, and data science is all about detecting patterns in data, which is why chess has been one of the most invested-in areas of AI in the past. This dataset collects all of the information available from 20,000 games and presents it in a format that is easy to process for analysis of, for example, what allows a player to win as black or white, how strongly meta (out-of-game) factors affect a game, the relationship between openings and victory for black and white, and more.
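For example, the opening-vs-victory question can be sketched with a simple group-by. The file name `games.csv` and the `opening_name`/`winner` column names are assumptions based on the field list above.

```python
# Sketch: outcome shares by opening (file and column names are assumptions).
import pandas as pd

games = pd.read_csv("games.csv")

# Share of white/black/draw outcomes for the ten most common openings.
top_openings = games["opening_name"].value_counts().head(10).index
summary = (games[games["opening_name"].isin(top_openings)]
           .groupby("opening_name")["winner"]
           .value_counts(normalize=True)
           .unstack(fill_value=0))
print(summary)
```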
[1]: https://www.365chess.com/eco.php
[2]: https://github.com/ornicar/lila
|
Chess Game Dataset (Lichess)
|
20,000+ Lichess Games, including moves, victor, rating, opening details and more
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.037735
| 0.025456
| 0.058934
| 0.582906
| 0.176258
| 0.162338
| 27
|
27,548
| 49,864
| 2,115,707
| 2,115,707
| null | 274,957
| 287,262
| 58,489
|
Dataset
|
09/04/2018 18:19:51
|
09/04/2018
| 1,998,685
| 282,267
| 5,047
| 1,146
| 1
|
11/06/2019
| 49,864
|
# [ADVISORY] IMPORTANT #
## Instructions for citation:
If you use this dataset anywhere in your work, kindly cite as the below:
L. Gupta, "Google Play Store Apps," Feb 2019. [Online]. Available: https://www.kaggle.com/lava18/google-play-store-apps
### Context
While many public datasets (on Kaggle and the like) provide Apple App Store data, there are not many counterpart datasets available for Google Play Store apps anywhere on the web. On digging deeper, I found out that the iTunes App Store page deploys a nicely indexed, appendix-like structure that allows for simple and easy web scraping. On the other hand, the Google Play Store uses sophisticated modern-day techniques (like dynamic page loading with jQuery), making scraping more challenging.
### Content
Each app (row) has values for category, rating, size, and more.
### Acknowledgements
This information is scraped from the Google Play Store. This app information would not be available without it.
### Inspiration
The Play Store apps data has enormous potential to drive app-making businesses to success. Actionable insights can be drawn for developers to work on and capture the Android market!
|
Google Play Store Apps
|
Data of 10k Play Store apps for analysing the Android market.
|
CC BY-SA 4.0
| 6
| 6
| null | 0
| 0.165328
| 0.092164
| 0.283649
| 0.15069
| 0.172958
| 0.159528
| 28
|
8,121
| 9,366
| 753,574
| 753,574
| null | 13,206
| 13,206
| 16,662
|
Dataset
|
01/11/2018 16:04:39
|
02/05/2018
| 330,693
| 52,726
| 957
| 4,476
| 1
|
12/15/2020
| 9,366
|
### Context
The Ramen Rater is a product review website for the hardcore ramen enthusiast (or "ramenphile"), with over 2500 reviews to date. This dataset is an export of "The Big List" (of reviews), converted to a CSV format.
### Content
Each record in the dataset is a single ramen product review. Review numbers are contiguous: more recently reviewed ramen varieties have higher numbers. Brand, Variety (the product name), Country, and Style (Cup? Bowl? Tray?) are pretty self-explanatory. Stars indicate the ramen quality, as assessed by the reviewer, on a 5-point scale; this is the most important column in the dataset!
Note that this dataset does *not* include the text of the reviews themselves. For that, you should browse through https://www.theramenrater.com/ instead!
### Acknowledgements
This dataset is republished as-is from the original [BIG LIST](https://www.theramenrater.com/resources-2/the-list/) on https://www.theramenrater.com/.
### Inspiration
* What ingredients or flavors are most commonly advertised on ramen package labels?
* How do ramen ratings compare against ratings for other food products (like, say, wine)?
* How is ramen manufacturing internationally distributed?
|
Ramen Ratings
|
Over 2500 ramen ratings
|
Data files © Original Authors
| 1
| 1
| null | 0
| 0.027354
| 0.017476
| 0.052984
| 0.58856
| 0.171594
| 0.158365
| 29
|
18,807
| 55,151
| 1,549,225
| null | 1,942
| 2,669,146
| 2,713,424
| 63,908
|
Dataset
|
09/21/2018 15:23:38
|
09/21/2018
| 1,796,403
| 381,368
| 3,704
| 616
| 1
|
11/06/2019
| 55,151
|
# Brazilian E-Commerce Public Dataset by Olist
Welcome! This is a Brazilian e-commerce public dataset of orders made at [Olist Store](http://www.olist.com). The dataset has information on 100k orders placed from 2016 to 2018 at multiple marketplaces in Brazil. Its features allow viewing an order from multiple dimensions: from order status, price, payment and freight performance to customer location, product attributes and, finally, reviews written by customers. We also released a geolocation dataset that relates Brazilian zip codes to lat/lng coordinates.
This is real commercial data; it has been anonymised, and references to the companies and partners in the review text have been replaced with the names of Game of Thrones great houses.
## Join it With the Marketing Funnel by Olist
We have also released a [Marketing Funnel Dataset](https://www.kaggle.com/olistbr/marketing-funnel-olist/home). You may join both datasets and see an order from a marketing perspective now!
**Instructions on joining are available on this [Kernel](https://www.kaggle.com/andresionek/joining-marketing-funnel-with-brazilian-e-commerce).**
## Context
This dataset was generously provided by Olist, the largest department store in Brazilian marketplaces. Olist connects small businesses from all over Brazil to channels without hassle and with a single contract. Those merchants are able to sell their products through the Olist Store and ship them directly to the customers using Olist logistics partners. See more on our website: [www.olist.com](https://www.olist.com)
After a customer purchases the product from Olist Store, a seller gets notified to fulfill that order. Once the customer receives the product, or the estimated delivery date is due, the customer gets a satisfaction survey by email where they can rate the purchase experience and write down some comments.
### Attention
1. An order might have multiple items.
2. Each item might be fulfilled by a distinct seller.
3. All text identifying stores and partners was replaced by the names of Game of Thrones great houses.
### Example of a product listing on a marketplace

## Data Schema
The data is divided in multiple datasets for better understanding and organization. Please refer to the following data schema when working with it:

## Classified Dataset
We had previously released a classified dataset, but we removed it in *Version 6*. We intend to release it again as a new dataset with a new data schema. Until we finish it, you may use the classified dataset available in *Version 5* or earlier.
## Inspiration
Here is some inspiration for possible outcomes from this dataset.
**NLP:** <br>
This dataset offers a supreme environment to parse out the review text through its multiple dimensions.
**Clustering:**<br>
Some customers didn't write a review. But why are they happy or mad?
**Sales Prediction:**<br>
With purchase date information you'll be able to predict future sales.
**Delivery Performance:**<br>
You will also be able to work through delivery performance and find ways to optimize delivery times.
**Product Quality:** <br>
Enjoy discovering the product categories that are more prone to customer dissatisfaction.
**Feature Engineering:** <br>
Create features from this rich dataset or attach some external public information to it.
## Acknowledgements
Thanks to Olist for releasing this dataset.
|
Brazilian E-Commerce Public Dataset by Olist
|
100,000 Orders with product, customer and reviews info
|
CC BY-NC-SA 4.0
| 2
| 8
| null | 0
| 0.148595
| 0.067639
| 0.383235
| 0.080999
| 0.170117
| 0.157104
| 30
|
15,194
| 494,766
| 1,302,389
| 1,302,389
| null | 1,402,868
| 1,435,700
| 507,860
|
Dataset
|
01/30/2020 14:46:58
|
01/30/2020
| 1,465,690
| 397,014
| 2,681
| 749
| 1
|
03/18/2020
| 494,766
|
[](https://forthebadge.com) [](https://forthebadge.com)
### Context
- A new coronavirus designated 2019-nCoV was first identified in Wuhan, the capital of China's Hubei province
- People developed pneumonia without a clear cause, for which existing vaccines or treatments were not effective.
- The virus has shown evidence of human-to-human transmission
- Transmission rate (rate of infection) appeared to escalate in mid-January 2020
- As of 30 January 2020, approximately 8,243 cases have been confirmed
### Content
> * **full_grouped.csv** - Day to day country wise no. of cases (Has County/State/Province level data)
> * **covid_19_clean_complete.csv** - Day to day country wise no. of cases (Doesn't have County/State/Province level data)
> * **country_wise_latest.csv** - Latest country level no. of cases
> * **day_wise.csv** - Day wise no. of cases (Doesn't have country level data)
> * **usa_county_wise.csv** - Day to day county level no. of cases
> * **worldometer_data.csv** - Latest data from https://www.worldometers.info/
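As a quick orientation over the files listed above, the sketch below loads the latest country-level file and lists the hardest-hit countries. The column names are assumptions; verify them against the actual header.

```python
# Quick sketch of the latest country-level totals (column names are assumptions).
import pandas as pd

latest = pd.read_csv("country_wise_latest.csv")
cols = ["Country/Region", "Confirmed", "Deaths", "Recovered"]
print(latest[cols].sort_values("Confirmed", ascending=False).head(10))
```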
### Acknowledgements / Data Source
> https://github.com/CSSEGISandData/COVID-19
> https://www.worldometers.info/
### Collection methodology
> https://github.com/imdevskp/covid_19_jhu_data_web_scrap_and_cleaning
### Cover Photo
> Photo from National Institutes of Allergy and Infectious Diseases
> https://www.niaid.nih.gov/news-events/novel-coronavirus-sarscov2-images
> https://blogs.cdc.gov/publichealthmatters/2019/04/h1n1/
### Similar Datasets
> * COVID-19 - https://www.kaggle.com/imdevskp/corona-virus-report
> * MERS - https://www.kaggle.com/imdevskp/mers-outbreak-dataset-20122019
> * Ebola Western Africa 2014 Outbreak - https://www.kaggle.com/imdevskp/ebola-outbreak-20142016-complete-dataset
> * H1N1 | Swine Flu 2009 Pandemic Dataset - https://www.kaggle.com/imdevskp/h1n1-swine-flu-2009-pandemic-dataset
> * SARS 2003 Pandemic - https://www.kaggle.com/imdevskp/sars-outbreak-2003-complete-dataset
> * HIV AIDS - https://www.kaggle.com/imdevskp/hiv-aids-dataset
|
COVID-19 Dataset
|
Number of Confirmed, Death and Recovered cases every day across the globe
|
Other (specified in description)
| 166
| 166
| null | 0
| 0.121239
| 0.048958
| 0.398957
| 0.098488
| 0.166911
| 0.15436
| 31
|
13,189
| 2,478
| 1,162,990
| 1,162,990
| null | 1,151,655
| 1,182,398
| 6,555
|
Dataset
|
09/13/2017 22:41:53
|
02/05/2018
| 417,677
| 40,117
| 1,380
| 4,290
| 1
|
02/24/2020
| 2,478
|
### Context:
This data publication contains a spatial database of wildfires that occurred in the United States from 1992 to 2015. It is the third update of a publication originally generated to support the national Fire Program Analysis (FPA) system. The wildfire records were acquired from the reporting systems of federal, state, and local fire organizations. The following core data elements were required for records to be included in this data publication: discovery date, final fire size, and a point location at least as precise as Public Land Survey System (PLSS) section (1-square mile grid). The data were transformed to conform, when possible, to the data standards of the National Wildfire Coordinating Group (NWCG). Basic error-checking was performed and redundant records were identified and removed, to the degree possible. The resulting product, referred to as the Fire Program Analysis fire-occurrence database (FPA FOD), includes 1.88 million geo-referenced wildfire records, representing a total of 140 million acres burned during the 24-year period.
### Content:
This dataset is an SQLite database that contains the following information:
* Fires: Table including wildfire data for the period of 1992-2015 compiled from US federal, state, and local reporting systems.
* FOD_ID = Global unique identifier.
* FPA_ID = Unique identifier that contains information necessary to track back to the original record in the source dataset.
* SOURCE_SYSTEM_TYPE = Type of source database or system that the record was drawn from (federal, nonfederal, or interagency).
* SOURCE_SYSTEM = Name of or other identifier for source database or system that the record was drawn from. See Table 1 in Short (2014), or \Supplements\FPA_FOD_source_list.pdf, for a list of sources and their identifier.
* NWCG_REPORTING_AGENCY = Active National Wildlife Coordinating Group (NWCG) Unit Identifier for the agency preparing the fire report (BIA = Bureau of Indian Affairs, BLM = Bureau of Land Management, BOR = Bureau of Reclamation, DOD = Department of Defense, DOE = Department of Energy, FS = Forest Service, FWS = Fish and Wildlife Service, IA = Interagency Organization, NPS = National Park Service, ST/C&L = State, County, or Local Organization, and TRIBE = Tribal Organization).
* NWCG_REPORTING_UNIT_ID = Active NWCG Unit Identifier for the unit preparing the fire report.
* NWCG_REPORTING_UNIT_NAME = Active NWCG Unit Name for the unit preparing the fire report.
* SOURCE_REPORTING_UNIT = Code for the agency unit preparing the fire report, based on code/name in the source dataset.
* SOURCE_REPORTING_UNIT_NAME = Name of reporting agency unit preparing the fire report, based on code/name in the source dataset.
* LOCAL_FIRE_REPORT_ID = Number or code that uniquely identifies an incident report for a particular reporting unit and a particular calendar year.
* LOCAL_INCIDENT_ID = Number or code that uniquely identifies an incident for a particular local fire management organization within a particular calendar year.
* FIRE_CODE = Code used within the interagency wildland fire community to track and compile cost information for emergency fire suppression (https://www.firecode.gov/).
* FIRE_NAME = Name of the incident, from the fire report (primary) or ICS-209 report (secondary).
* ICS_209_INCIDENT_NUMBER = Incident (event) identifier, from the ICS-209 report.
* ICS_209_NAME = Name of the incident, from the ICS-209 report.
* MTBS_ID = Incident identifier, from the MTBS perimeter dataset.
* MTBS_FIRE_NAME = Name of the incident, from the MTBS perimeter dataset.
* COMPLEX_NAME = Name of the complex under which the fire was ultimately managed, when discernible.
* FIRE_YEAR = Calendar year in which the fire was discovered or confirmed to exist.
* DISCOVERY_DATE = Date on which the fire was discovered or confirmed to exist.
* DISCOVERY_DOY = Day of year on which the fire was discovered or confirmed to exist.
* DISCOVERY_TIME = Time of day that the fire was discovered or confirmed to exist.
* STAT_CAUSE_CODE = Code for the (statistical) cause of the fire.
* STAT_CAUSE_DESCR = Description of the (statistical) cause of the fire.
* CONT_DATE = Date on which the fire was declared contained or otherwise controlled (mm/dd/yyyy where mm=month, dd=day, and yyyy=year).
* CONT_DOY = Day of year on which the fire was declared contained or otherwise controlled.
* CONT_TIME = Time of day that the fire was declared contained or otherwise controlled (hhmm where hh=hour, mm=minutes).
* FIRE_SIZE = Estimate of acres within the final perimeter of the fire.
* FIRE_SIZE_CLASS = Code for fire size based on the number of acres within the final fire perimeter expenditures (A=greater than 0 but less than or equal to 0.25 acres, B=0.26-9.9 acres, C=10.0-99.9 acres, D=100-299 acres, E=300 to 999 acres, F=1000 to 4999 acres, and G=5000+ acres).
* LATITUDE = Latitude (NAD83) for point location of the fire (decimal degrees).
* LONGITUDE = Longitude (NAD83) for point location of the fire (decimal degrees).
* OWNER_CODE = Code for primary owner or entity responsible for managing the land at the point of origin of the fire at the time of the incident.
* OWNER_DESCR = Name of primary owner or entity responsible for managing the land at the point of origin of the fire at the time of the incident.
* STATE = Two-letter alphabetic code for the state in which the fire burned (or originated), based on the nominal designation in the fire report.
* COUNTY = County, or equivalent, in which the fire burned (or originated), based on nominal designation in the fire report.
* FIPS_CODE = Three-digit code from the Federal Information Process Standards (FIPS) publication 6-4 for representation of counties and equivalent entities.
* FIPS_NAME = County name from the FIPS publication 6-4 for representation of counties and equivalent entities.
* NWCG_UnitIDActive_20170109: Look-up table containing all NWCG identifiers for agency units that were active (i.e., valid) as of 9 January 2017, when the list was downloaded from https://www.nifc.blm.gov/unit_id/Publish.html and used as the source of values available to populate the following fields in the Fires table: NWCG_REPORTING_AGENCY, NWCG_REPORTING_UNIT_ID, and NWCG_REPORTING_UNIT_NAME.
* UnitId = NWCG Unit ID.
* GeographicArea = Two-letter code for the geographic area in which the unit is located (NA=National, IN=International, AK=Alaska, CA=California, EA=Eastern Area, GB=Great Basin, NR=Northern Rockies, NW=Northwest, RM=Rocky Mountain, SA=Southern Area, and SW=Southwest).
* Gacc = Seven or eight-letter code for the Geographic Area Coordination Center in which the unit is located or primarily affiliated with (CAMBCIFC=Canadian Interagency Forest Fire Centre, USAKCC=Alaska Interagency Coordination Center, USCAONCC=Northern California Area Coordination Center, USCAOSCC=Southern California Coordination Center, USCORMCC=Rocky Mountain Area Coordination Center, USGASAC=Southern Area Coordination Center, USIDNIC=National Interagency Coordination Center, USMTNRC=Northern Rockies Coordination Center, USNMSWC=Southwest Area Coordination Center, USORNWC=Northwest Area Coordination Center, USUTGBC=Western Great Basin Coordination Center, USWIEACC=Eastern Area Coordination Center).
* WildlandRole = Role of the unit within the wildland fire community.
* UnitType = Type of unit (e.g., federal, state, local).
* Department = Department (or state/territory) to which the unit belongs (AK=Alaska, AL=Alabama, AR=Arkansas, AZ=Arizona, CA=California, CO=Colorado, CT=Connecticut, DE=Delaware, DHS=Department of Homeland Security, DOC= Department of Commerce, DOD=Department of Defense, DOE=Department of Energy, DOI= Department of Interior, DOL=Department of Labor, FL=Florida, GA=Georgia, IA=Iowa, IA/GC=Non-Departmental Agencies, ID=Idaho, IL=Illinois, IN=Indiana, KS=Kansas, KY=Kentucky, LA=Louisiana, MA=Massachusetts, MD=Maryland, ME=Maine, MI=Michigan, MN=Minnesota, MO=Missouri, MS=Mississippi, MT=Montana, NC=North Carolina, NE=Nebraska, NG=Non-Government, NH=New Hampshire, NJ=New Jersey, NM=New Mexico, NV=Nevada, NY=New York, OH=Ohio, OK=Oklahoma, OR=Oregon, PA=Pennsylvania, PR=Puerto Rico, RI=Rhode Island, SC=South Carolina, SD=South Dakota, ST/L=State or Local Government, TN=Tennessee, Tribe=Tribe, TX=Texas, USDA=Department of Agriculture, UT=Utah, VA=Virginia, VI=U. S. Virgin Islands, VT=Vermont, WA=Washington, WI=Wisconsin, WV=West Virginia, WY=Wyoming).
* Agency = Agency or bureau to which the unit belongs (AG=Air Guard, ANC=Alaska Native Corporation, BIA=Bureau of Indian Affairs, BLM=Bureau of Land Management, BOEM=Bureau of Ocean Energy Management, BOR=Bureau of Reclamation, BSEE=Bureau of Safety and Environmental Enforcement, C&L=County & Local, CDF=California Department of Forestry & Fire Protection, DC=Department of Corrections, DFE=Division of Forest Environment, DFF=Division of Forestry Fire & State Lands, DFL=Division of Forests and Land, DFR=Division of Forest Resources, DL=Department of Lands, DNR=Department of Natural Resources, DNRC=Department of Natural Resources and Conservation, DNRF=Department of Natural Resources Forest Service, DOA=Department of Agriculture, DOC=Department of Conservation, DOE=Department of Energy, DOF=Department of Forestry, DVF=Division of Forestry, DWF=Division of Wildland Fire, EPA=Environmental Protection Agency, FC=Forestry Commission, FEMA=Federal Emergency Management Agency, FFC=Bureau of Forest Fire Control, FFP=Forest Fire Protection, FFS=Forest Fire Service, FR=Forest Rangers, FS=Forest Service, FWS=Fish & Wildlife Service, HQ=Headquarters, JC=Job Corps, NBC=National Business Center, NG=National Guard, NNSA=National Nuclear Security Administration, NPS=National Park Service, NWS=National Weather Service, OES=Office of Emergency Services, PRI=Private, SF=State Forestry, SFS=State Forest Service, SP=State Parks, TNC=The Nature Conservancy, USA=United States Army, USACE=United States Army Corps of Engineers, USAF=United States Air Force, USGS=United States Geological Survey, USN=United States Navy).
* Parent = Agency subgroup to which the unit belongs (A concatenation of State and Unit from this report - https://www.nifc.blm.gov/unit_id/publish/UnitIdReport.rtf).
* Country = Country in which the unit is located (e.g. US = United States).
* State = Two-letter code for the state in which the unit is located (or primarily affiliated).
* Code = Unit code (follows state code to create UnitId).
* Name = Unit name.
### Acknowledgements:
These data were collected using funding from the U.S. Government and can be used without additional permissions or fees. If you use these data in a publication, presentation, or other research product please use the following citation:
Short, Karen C. 2017. Spatial wildfire occurrence data for the United States, 1992-2015 [FPA_FOD_20170508]. 4th Edition. Fort Collins, CO: Forest Service Research Data Archive. https://doi.org/10.2737/RDS-2013-0009.4
### Inspiration:
* Have wildfires become more or less frequent over time?
* What counties are the most and least fire-prone?
* Given the size, location and date, can you predict the cause of a wildfire?
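Since the data ships as an SQLite database, the questions above can be approached with a couple of SQL queries. The `.sqlite` file name below is an assumption; the `Fires` table and column names follow the dictionary above.

```python
# Sketch for exploring the Fires table (the .sqlite file name is an assumption).
import sqlite3
import pandas as pd

conn = sqlite3.connect("FPA_FOD_20170508.sqlite")
fires = pd.read_sql(
    "SELECT FIRE_YEAR, STAT_CAUSE_DESCR, FIRE_SIZE, STATE FROM Fires", conn)

# Has the number of recorded wildfires changed over time?
print(fires.groupby("FIRE_YEAR").size())

# Which causes account for the most fires?
print(fires["STAT_CAUSE_DESCR"].value_counts().head())
```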
|
1.88 Million US Wildfires
|
24 years of geo-referenced wildfire records
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.034549
| 0.0252
| 0.040313
| 0.564103
| 0.166041
| 0.153615
| 32
|
4,397
| 483
| 395,512
| null | 7
| 982
| 982
| 2,105
|
Dataset
|
12/02/2016 19:29:17
|
02/06/2018
| 1,167,154
| 309,098
| 1,735
| 1,631
| 1
|
11/06/2019
| 483
|
## Context
The SMS Spam Collection is a set of tagged SMS messages that have been collected for SMS spam research. It contains one set of 5,574 SMS messages in English, tagged according to whether they are ham (legitimate) or spam.
## Content
The files contain one message per line. Each line is composed of two columns: v1 contains the label (ham or spam) and v2 contains the raw text.
This corpus has been collected from free or free-for-research sources on the Internet:
-> A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received. The identification of the text of spam messages in the claims is a very hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is:[ \[Web Link\]][1].
-> A subset of 3,375 SMS randomly chosen ham messages of the NUS SMS Corpus (NSC), which is a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans and mostly from students attending the University. These messages were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is available at:[ \[Web Link\]][2].
-> A list of 450 SMS ham messages collected from Caroline Tag's PhD Thesis available at[ \[Web Link\]][3].
-> Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages, and it is publicly available at:[ \[Web Link\]][4]. This corpus has been used in the academic research referenced below.
## Acknowledgements
The original dataset can be found [here](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). The creators would like to note that in case you find the dataset useful, please make a reference to previous paper and the web page: http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ in your papers, research, etc.
We offer a comprehensive study of this corpus in the following paper. This work presents a number of statistics, studies and baseline results for several machine learning methods.
Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.
## Inspiration
* Can you use this dataset to build a prediction model that will accurately classify which texts are spam?
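A common first attempt is a bag-of-words model with Naive Bayes over the v1/v2 columns described above. The file name `spam.csv` and the `latin-1` encoding are assumptions about the export; swap them for whatever the actual file uses.

```python
# Baseline sketch: bag-of-words + Naive Bayes on the v1/v2 columns (file name and encoding are assumptions).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("spam.csv", encoding="latin-1")[["v1", "v2"]]
X_train, X_test, y_train, y_test = train_test_split(
    df["v2"], df["v1"], test_size=0.25, stratify=df["v1"], random_state=0)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```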
[1]: http://www.grumbletext.co.uk/
[2]: http://www.comp.nus.edu.sg/~rpnlpir/downloads/corpora/smsCorpus/
[3]: http://etheses.bham.ac.uk/253/1/Tagg09PhD.pdf
[4]: http://www.esp.uem.es/jmgomez/smsspamcorpus/
|
SMS Spam Collection Dataset
|
Collection of SMS messages tagged as spam or legitimate
|
Unknown
| 1
| 1
|
lending, banking, crowdfunding, insurance, investing, currencies and foreign exchange
| 6
| 0.096545
| 0.031683
| 0.310611
| 0.214464
| 0.163326
| 0.151283
| 33
|
4,073
| 10,128
| 360,751
| 360,751
| null | 5,438,389
| 5,512,409
| 17,476
|
Dataset
|
01/17/2018 23:57:36
|
02/04/2018
| 145,239
| 23,871
| 490
| 4,620
| 1
|
01/21/2021
| 10,128
|
#### PAID ADVERTISEMENT
Part 2 of the dataset is complete (for now!) There you'll find data specific to the Supplemental Nutrition Assistance (SNAP) Program. The US SNAP program provides food benefits to low-income families to supplement their grocery budget.
Link: [US Public Food Assistance 2 - SNAP](https://www.kaggle.com/datasets/jpmiller/food-security)
Please click on the ▲ if you find it useful -- it has almost 500 downloads!
## Context
This dataset, Part 1, addresses another US program, the Special Supplemental Nutrition Program for Women, Infants, and Children Program, or simply WIC. The program allocates Federal and State funds to help low-income women and children up to age five who are at nutritional risk. Funds are used to provide supplemental foods, baby formula, health care, and nutrition education.
## Content
Files may include participation data and spending for state programs, and poverty data for each state. Data for WIC covers fiscal years 2013-2016, which is actually October 2012 through September 2016.
## Motivation
My original purpose here is two-fold:
- Explore various aspects of US Public Assistance. Show trends over recent years and better understand differences across state agencies. Although the federal government sponsors the program and provides funding, programs are administered at the state level and can vary widely. Indian nations (Native Americans) also administer their own programs.
- Share with the Kaggle Community the joy - and pain - of working with government data. Data is often spread across numerous agency sites and comes in a variety of formats. Often the data is provided in Excel, with the files consisting of multiple tabs. Also, files are formatted as reports and contain aggregated data (sums, averages, etc.) along with base data.
As of March 2nd, I am expanding the purpose to support the [M5 Forecasting Challenges](https://www.kaggle.com/c/m5-forecasting-accuracy/overview/evaluation) here on Kaggle. Store sales are partly driven by participation in Public Assistance programs. Participants typically receive the items free of charge. The store then recovers the sale price from the state agencies administering the program.
|
US Public Food Assistance 1 - WIC
|
Where does it come from, who spends it, who gets it.
|
Other (specified in description)
| 9
| 12
| null | 0
| 0.012014
| 0.008948
| 0.023988
| 0.607495
| 0.163111
| 0.151098
| 34
|
8,435
| 655
| 772,431
| 772,431
| null | 1,252
| 1,252
| 2,371
|
Dataset
|
01/13/2017 04:18:10
|
02/06/2018
| 102,863
| 16,300
| 435
| 4,610
| 1
|
07/22/2025
| 655
|
# Context
[Pitchfork](https://pitchfork.com/) is a music-centric online magazine. It was started in 1995 and grew out of independent music reviewing into a general publication format, but is still famed for its variety of music reviews. I scraped over 18,000 [Pitchfork][1] reviews (going back to January 1999). Initially, this was done to satisfy a few of [my own curiosities][2], but I bet Kagglers can come up with some really interesting analyses!
# Content
This dataset is provided as a `sqlite` database with the following tables: `artists`, `content`, `genres`, `labels`, `reviews`, `years`. For column-level information on specific tables, refer to the [Metadata tab](https://www.kaggle.com/nolanbconaway/pitchfork-data/data).
# Inspiration
* Do review scores for individual artists generally improve over time, or go down?
* How has Pitchfork's review genre selection changed over time?
* Who are the most highly rated artists? The least highly rated artists?
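One way into the "scores over time" question is a quick query against the `reviews` table. The database file name and the column names below are assumptions; list the table schema first if they differ.

```python
# Small exploration sketch (database file and column names are assumptions).
import sqlite3
import pandas as pd

conn = sqlite3.connect("database.sqlite")
reviews = pd.read_sql("SELECT artist, score, pub_year FROM reviews", conn)

# Average score per year: has Pitchfork become more or less generous over time?
print(reviews.groupby("pub_year")["score"].mean())
```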
# Acknowledgements
Gotta love [Beautiful Soup][4]!
[1]: http://pitchfork.com/
[2]: https://github.com/nolanbconaway/pitchfork-data
[3]: https://github.com/nolanbconaway/pitchfork-data/tree/master/scrape
[4]: https://www.crummy.com/software/BeautifulSoup/
|
18,393 Pitchfork Reviews
|
Pitchfork reviews from Jan 5, 1999 to Jan 8, 2017
|
Unknown
| 1
| 1
| null | 0
| 0.008509
| 0.007944
| 0.01638
| 0.60618
| 0.159753
| 0.148207
| 35
|
105,419
| 1,120,859
| 6,402,661
| 6,402,661
| null | 1,882,037
| 1,920,174
| 1,138,221
|
Dataset
|
01/26/2021 19:29:28
|
01/26/2021
| 1,589,502
| 259,569
| 3,378
| 1,375
| 1
|
03/10/2021
| 1,120,859
|
### Similar Datasets
- [**HIGHLIGHTED**] CERN Electron Collision Data ☄️[LINK](https://www.kaggle.com/datasets/fedesoriano/cern-electron-collision-data)
- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)
- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)
- Cirrhosis Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/cirrhosis-prediction-dataset)
- Heart Failure Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/heart-failure-prediction)
- Stellar Classification Dataset - SDSS17: [LINK](https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17)
- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
According to the World Health Organization (WHO), stroke is the 2nd leading cause of death globally, responsible for approximately 11% of total deaths.
This dataset is used to predict whether a patient is likely to get a stroke based on input parameters like gender, age, various diseases, and smoking status. Each row in the data provides relevant information about the patient.
### Attribute Information
1) id: unique identifier
2) gender: "Male", "Female" or "Other"
3) age: age of the patient
4) hypertension: 0 if the patient doesn't have hypertension, 1 if the patient has hypertension
5) heart\_disease: 0 if the patient doesn't have any heart diseases, 1 if the patient has a heart disease
6) ever\_married: "No" or "Yes"
7) work\_type: "children", "Govt\_jov", "Never\_worked", "Private" or "Self-employed"
8) Residence\_type: "Rural" or "Urban"
9) avg\_glucose\_level: average glucose level in blood
10) bmi: body mass index
11) smoking\_status: "formerly smoked", "never smoked", "smokes" or "Unknown"*
12) stroke: 1 if the patient had a stroke or 0 if not
*Note: "Unknown" in smoking\_status means that the information is unavailable for this patient
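Since stroke cases are usually a small minority of the rows, a sensible baseline weights the classes rather than optimizing raw accuracy. The file name below and the median imputation for `bmi` are assumptions; the column names follow the attribute list above.

```python
# Minimal sketch (file name is an assumption; columns follow the attribute list above).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("healthcare-dataset-stroke-data.csv").drop(columns=["id"])

# Guard against possible missing bmi values by imputing the median.
df["bmi"] = pd.to_numeric(df["bmi"], errors="coerce")
df["bmi"] = df["bmi"].fillna(df["bmi"].median())

X = pd.get_dummies(df.drop(columns=["stroke"]), drop_first=True)
y = df["stroke"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
# Stroke cases are rare, so weight the classes instead of chasing raw accuracy.
clf = LogisticRegression(max_iter=2000, class_weight="balanced").fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```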
### Acknowledgements
**(Confidential Source)** - *Use only for educational purposes*
If you use this dataset in your research, please credit the author.
|
Stroke Prediction Dataset
|
11 clinical features for predicting stroke events
|
Data files © Original Authors
| 1
| 1
| null | 0
| 0.131481
| 0.061686
| 0.26084
| 0.180802
| 0.158702
| 0.147301
| 36
|
8,144
| 2,894
| 753,574
| null | 147
| 4,877
| 4,877
| 7,514
|
Dataset
|
10/10/2017 18:05:30
|
02/05/2018
| 202,097
| 19,697
| 832
| 4,346
| 1
|
09/04/2020
| 2,894
|
### Context
The Kepler Space Observatory is a NASA-built satellite that was launched in 2009. The telescope is dedicated to searching for exoplanets in star systems besides our own, with the ultimate goal of possibly finding other habitable planets. The original mission ended in 2013 due to mechanical failures, but the telescope has nevertheless been functional since 2014 on a "K2" extended mission.
Kepler had verified 1284 new exoplanets as of May 2016. As of October 2017 there are over 3000 confirmed exoplanets total (using all detection methods, including ground-based ones). The telescope is still active and continues to collect new data on its extended mission.
### Content
This dataset is a cumulative record of all observed Kepler "objects of interest" — basically, all of the approximately 10,000 exoplanet candidates Kepler has taken observations on.
This dataset has an extensive data dictionary, which can be accessed [here](https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.html). Highlightable columns of note are:
* `kepoi_name`: A KOI is a target identified by the Kepler Project that displays at least one transit-like sequence within Kepler time-series photometry that appears to be of astrophysical origin and initially consistent with a planetary transit hypothesis
* `kepler_name`: [These names] are intended to clearly indicate a class of objects that have been confirmed or validated as planets—a step up from the planet candidate designation.
* `koi_disposition`: The disposition in the literature towards this exoplanet candidate. One of CANDIDATE, FALSE POSITIVE, NOT DISPOSITIONED or CONFIRMED.
* `koi_pdisposition`: The disposition Kepler data analysis has towards this exoplanet candidate. One of FALSE POSITIVE, NOT DISPOSITIONED, and CANDIDATE.
* `koi_score`: A value between 0 and 1 that indicates the confidence in the KOI disposition. For CANDIDATEs, a higher value indicates more confidence in its disposition, while for FALSE POSITIVEs, a higher value indicates less confidence in that disposition.
### Acknowledgements
This dataset was published as-is by NASA. You can access the original table [here](https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi). More data from the Kepler mission is available from the same source [here](https://exoplanetarchive.ipac.caltech.edu/docs/data.html).
### Inspiration
* How often are exoplanets confirmed in the existing literature disconfirmed by measurements from Kepler? How about the other way round?
* What general characteristics about exoplanets (that we can find) can you derive from this dataset?
* What exoplanets get assigned names in the literature? What is the distribution of confidence scores?
See also: the [Kepler Labeled Time Series](https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data) and [Open Exoplanets Catalogue](https://www.kaggle.com/mrisdal/open-exoplanet-catalogue) datasets.
|
Kepler Exoplanet Search Results
|
10000 exoplanet candidates examined by the Kepler Space Observatory
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.016717
| 0.015193
| 0.019793
| 0.571466
| 0.155792
| 0.144786
| 37
|
9,653
| 1,067
| 862,007
| 862,007
| null | 1,925
| 1,925
| 3,045
|
Dataset
|
03/31/2017 06:55:16
|
02/06/2018
| 1,740,485
| 259,953
| 2,776
| 1,177
| 1
|
11/06/2019
| 1,067
|
Uncover the factors that lead to employee attrition and explore important questions such as ‘show me a breakdown of distance from home by job role and attrition’ or ‘compare average monthly income by education and attrition’. This is a fictional data set created by IBM data scientists.
Education
1 'Below College'
2 'College'
3 'Bachelor'
4 'Master'
5 'Doctor'
EnvironmentSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobInvolvement
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
PerformanceRating
1 'Low'
2 'Good'
3 'Excellent'
4 'Outstanding'
RelationshipSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
WorkLifeBalance
1 'Bad'
2 'Good'
3 'Better'
4 'Best'
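For analysis it often helps to map these ordinal codes back to their labels. The sketch below uses the mappings listed above; the CSV file name is an assumption.

```python
# Sketch mapping the ordinal codes above to their labels (file name is an assumption).
import pandas as pd

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

label_maps = {
    "Education": {1: "Below College", 2: "College", 3: "Bachelor", 4: "Master", 5: "Doctor"},
    "EnvironmentSatisfaction": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
    "JobInvolvement": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
    "JobSatisfaction": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
    "PerformanceRating": {1: "Low", 2: "Good", 3: "Excellent", 4: "Outstanding"},
    "RelationshipSatisfaction": {1: "Low", 2: "Medium", 3: "High", 4: "Very High"},
    "WorkLifeBalance": {1: "Bad", 2: "Good", 3: "Better", 4: "Best"},
}
readable = df.replace(label_maps)
print(readable[["Attrition", "JobSatisfaction", "WorkLifeBalance"]].head())
```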
|
IBM HR Analytics Employee Attrition & Performance
|
Predict attrition of your valuable employees
|
Database: Open Database, Contents: Database Contents
| 1
| 1
|
simulations
| 1
| 0.14397
| 0.050693
| 0.261225
| 0.154767
| 0.152664
| 0.142076
| 38
|
105,413
| 1,582,403
| 6,402,661
| 6,402,661
| null | 2,603,715
| 2,647,230
| 1,602,483
|
Dataset
|
09/10/2021 18:11:57
|
09/10/2021
| 1,375,970
| 236,288
| 2,970
| 1,514
| 1
|
10/13/2021
| 1,582,403
|
### Similar Datasets
- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)
- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)
- Cirrhosis Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/cirrhosis-prediction-dataset)
- Stroke Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/stroke-prediction-dataset)
- Stellar Classification Dataset - SDSS17: [LINK](https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17)
- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year, which accounts for 31% of all deaths worldwide. Four out of 5 CVD deaths are due to heart attacks and strokes, and one-third of these deaths occur prematurely in people under 70 years of age. Heart failure is a common event caused by CVDs, and this dataset contains 11 features that can be used to predict possible heart disease.
People with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease) need early detection and management wherein a machine learning model can be of great help.
### Attribute Information
1. Age: age of the patient [years]
1. Sex: sex of the patient [M: Male, F: Female]
1. ChestPainType: chest pain type [TA: Typical Angina, ATA: Atypical Angina, NAP: Non-Anginal Pain, ASY: Asymptomatic]
1. RestingBP: resting blood pressure [mm Hg]
1. Cholesterol: serum cholesterol [mg/dl]
1. FastingBS: fasting blood sugar [1: if FastingBS > 120 mg/dl, 0: otherwise]
1. RestingECG: resting electrocardiogram results [Normal: Normal, ST: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV), LVH: showing probable or definite left ventricular hypertrophy by Estes' criteria]
1. MaxHR: maximum heart rate achieved [Numeric value between 60 and 202]
1. ExerciseAngina: exercise-induced angina [Y: Yes, N: No]
1. Oldpeak: ST depression induced by exercise relative to rest [numeric value]
1. ST_Slope: the slope of the peak exercise ST segment [Up: upsloping, Flat: flat, Down: downsloping]
1. HeartDisease: output class [1: heart disease, 0: Normal]
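Given the tabular features above, a cross-validated tree ensemble is a reasonable first benchmark. The file name `heart.csv` is an assumption; the columns follow the attribute list.

```python
# Baseline sketch (file name is an assumption; columns follow the attribute list above).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart.csv")
X = pd.get_dummies(df.drop(columns=["HeartDisease"]), drop_first=True)
y = df["HeartDisease"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated ROC AUC:", scores.mean())
```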
### Source
This dataset was created by combining different datasets already available independently but not combined before. In this dataset, 5 heart datasets are combined over 11 common features which makes it the largest heart disease dataset available so far for research purposes. The five datasets used for its curation are:
- Cleveland: 303 observations
- Hungarian: 294 observations
- Switzerland: 123 observations
- Long Beach VA: 200 observations
- Stalog (Heart) Data Set: 270 observations
Total: 1190 observations
Duplicated: 272 observations
`Final dataset: 918 observations`
Every dataset used can be found under the Index of heart disease datasets from UCI Machine Learning Repository on the following link: [https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/)
### Citation
> fedesoriano. (September 2021). Heart Failure Prediction Dataset. Retrieved [Date Retrieved] from https://www.kaggle.com/fedesoriano/heart-failure-prediction.
### Acknowledgements
Creators:
1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
1. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
1. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
1. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D.
Donor:
David W. Aha (aha '@' ics.uci.edu) (714) 856-8779
Title: Heart Failure Prediction Dataset | Subtitle: 11 clinical features for predicting heart disease events. | License: Database: Open Database, Contents: © Original Authors | Version 1 (1 change) | Tags: none (0) | NormViews 0.113818, NormVotes 0.054236, NormDownloads 0.237445, NormKernels 0.19908, CombinedScore 0.151144, LogCombinedScore 0.140757 | Rank 39
Next record: Id 3,491, Dataset, created 10/26/2017, last activity 02/05/2018, 97,756 views, 14,199 downloads, 301 votes, 4,367 kernels, medal 1 (awarded 07/22/2025)

## Methodology
This is a data dump of the top 100 products (ordered by number of mentions) from every subreddit that has posted an Amazon product. The data was extracted from [Google Bigquery's Reddit Comment database](https://bigquery.cloud.google.com/dataset/fh-bigquery:reddit_comments). It only extracts Amazon links, so it is certainly a subset of all products posted to Reddit.
The data is organized in a file structure that follows:
```
reddits/<first lowercase letter of subreddit>/<subreddit>.csv
```
An example of where to find the top products for /r/Watches would be:
```
reddits/w/Watches.csv
```
## Definitions
Below are the column definitions found in each `<subreddit>.csv` file; a short loading sketch follows them.
**name**
The name of the product as found on Amazon.
**category**
The category of the product as found on Amazon.
**amazon_link**
The link to the product on Amazon.
**total_mentions**
The total number of times that product was found on Reddit.
**subreddit_mentions**
The total number of times that product was found on that subreddit.
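As a minimal sketch (assuming only the file layout and column names described above), the top products for a given subreddit can be read like this:

```python
import pandas as pd

subreddit = "Watches"  # the /r/Watches example from above
path = f"reddits/{subreddit[0].lower()}/{subreddit}.csv"

df = pd.read_csv(path)
# Rank products by how often they were mentioned in this subreddit.
top = df.sort_values("subreddit_mentions", ascending=False).head(5)
print(top[["name", "category", "subreddit_mentions", "total_mentions"]])
```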
## Want more?
You can search and discover products more easily on [ThingsOnReddit](https://thingsonreddit.com/)
## Acknowledgements
This dataset was published by Ben Rudolph on [GitHub](https://github.com/ThingsOnReddit/top-things), and was republished as-is on Kaggle.
Title: Things on Reddit | Subtitle: The top 100 products in each subreddit from 2015 to 2017 | License: Unknown | Version 1 (1 change) | Tags: none (0) | NormViews 0.008086, NormVotes 0.005497, NormDownloads 0.014269, NormKernels 0.574227, CombinedScore 0.15052, LogCombinedScore 0.140214 | Rank 40
Next record: Id 179,555, Dataset, created 04/30/2019, last activity 04/30/2019, 122,508 views, 22,720 downloads, 315 votes, 4,245 kernels, medal 1 (awarded 07/22/2025)
This data is also available at https://www.kaggle.com/open-powerlifting/powerlifting-database
This version of the data was created to ensure a static copy and reproducible results in a Kaggle Learn course.
Title: powerlifting-database | Subtitle: An unchanging copy of the Powerlifting Database | License: Unknown | Version 1 (1 change) | Tags: none (0) | NormViews 0.010134, NormVotes 0.005752, NormDownloads 0.022831, NormKernels 0.558185, CombinedScore 0.149226, LogCombinedScore 0.139088 | Rank 41
Next record: Id 42,674, Dataset, created 08/11/2018, last activity 08/11/2018, 1,066,167 views, 254,453 downloads, 1,921 votes, 1,642 kernels, medal 1 (awarded 02/05/2020)
### Context
This data set was created purely for learning customer segmentation concepts, also known as market basket analysis. I will demonstrate this by using an unsupervised ML technique (the K-Means clustering algorithm) in its simplest form.
### Content
You own a supermarket mall and, through membership cards, you have some basic data about your customers, such as Customer ID, age, gender, annual income and spending score.
Spending Score is something you assign to the customer based on your defined parameters like customer behavior and purchasing data.
**Problem Statement**
You own the mall and want to understand which customers can be easily converted [Target Customers], so that this insight can be given to the marketing team to plan the strategy accordingly.
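A minimal sketch of the K-Means approach described above (the file name `Mall_Customers.csv` and the exact column labels are assumptions):

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("Mall_Customers.csv")  # assumed file name
features = ["Annual Income (k$)", "Spending Score (1-100)"]  # assumed column labels
X = StandardScaler().fit_transform(df[features])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
df["Segment"] = km.labels_
# Profile each customer segment to identify the target customers.
print(df.groupby("Segment")[features].mean())
```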
### Acknowledgements
From Udemy's Machine Learning A-Z course.
I am new to the data science field and want to share my knowledge with others:
https://github.com/SteffiPeTaffy/machineLearningAZ/blob/master/Machine%20Learning%20A-Z%20Template%20Folder/Part%204%20-%20Clustering/Section%2025%20-%20Hierarchical%20Clustering/Mall_Customers.csv
### Inspiration
By the end of this case study, you will be able to answer the questions below.
1. How to achieve customer segmentation using a machine learning algorithm (K-Means clustering) in Python in the simplest way.
2. Who your target customers are, with whom you can start a marketing strategy [easy to converse with].
3. How the marketing strategy works in the real world.
Title: Mall Customer Segmentation Data | Subtitle: Market Basket Analysis | License: Other (specified in description) | Version 1 (1 change) | Tags: none (0) | NormViews 0.088191, NormVotes 0.03508, NormDownloads 0.255699, NormKernels 0.215911, CombinedScore 0.14872, LogCombinedScore 0.138648 | Rank 42
Next record: Id 30,292, Dataset, created 06/06/2018, last activity 06/06/2018, 1,520,499 views, 330,038 downloads, 3,812 votes, 502 kernels, medal 1 (awarded 11/06/2019)
### Context
It is a well-known fact that Millennials LOVE Avocado Toast. It's also a well-known fact that all Millennials live in their parents' basements.
Clearly, they aren't buying homes because they are buying too much Avocado Toast!
But maybe there's hope... if a Millennial could find a city with cheap avocados, they could live out the Millennial American Dream.
### Content
This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]:
> The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.
Some relevant columns in the dataset (a short pandas sketch follows this list):
- `Date` - The date of the observation
- `AveragePrice` - the average price of a single avocado
- `type` - conventional or organic
- `year` - the year
- `Region` - the city or region of the observation
- `Total Volume` - Total number of avocados sold
- `4046` - Total number of avocados with PLU 4046 sold
- `4225` - Total number of avocados with PLU 4225 sold
- `4770` - Total number of avocados with PLU 4770 sold
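As a rough sketch of how the Inspiration questions below could be approached (the file name `avocado.csv` is an assumption, and the column names are taken as listed above):

```python
import pandas as pd

df = pd.read_csv("avocado.csv")  # assumed file name
conventional = df[df["type"] == "conventional"]

# Average price per region, cheapest first (column names as listed above).
cheapest = (conventional.groupby("Region")["AveragePrice"]
            .mean()
            .sort_values()
            .head(10))
print(cheapest)
```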
### Acknowledgements
Many thanks to the Hass Avocado Board for sharing this data!!
http://www.hassavocadoboard.com/retail/volume-and-price-data
### Inspiration
In which cities can millennials have their avocado toast AND buy a home?
Was the Avocadopocalypse of 2017 real?
[1]: http://www.hassavocadoboard.com/retail/volume-and-price-data
Title: Avocado Prices | Subtitle: Historical data on avocado prices and sales volume in multiple US markets | License: Database: Open Database, Contents: © Original Authors | Version 1 (1 change) | Tags: none (0) | NormViews 0.125773, NormVotes 0.069612, NormDownloads 0.331653, NormKernels 0.066009, CombinedScore 0.148262, LogCombinedScore 0.138249 | Rank 43
Next record: Id 128, Dataset, created 08/25/2016, last activity 02/06/2018, 1,214,889 views, 228,904 downloads, 2,273 votes, 1,449 kernels, medal 1 (awarded 11/06/2019)
This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.
It's a great dataset for evaluating simple regression models.
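For example, a baseline linear regression might look like the sketch below (the file name `kc_house_data.csv` and the column names are assumptions about this dataset's layout):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("kc_house_data.csv")  # assumed file name
features = ["sqft_living", "bedrooms", "bathrooms", "floors"]  # assumed columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["price"], random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out homes:", model.score(X_test, y_test))
```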
Title: House Sales in King County, USA | Subtitle: Predict house price using regression | License: CC0: Public Domain | Version 1 (1 change) | Tags: none (0) | NormViews 0.100493, NormVotes 0.041508, NormDownloads 0.230024, NormKernels 0.190533, CombinedScore 0.140639, LogCombinedScore 0.131589 | Rank 44
Next record: Id 5,227, Dataset, created 11/24/2017, last activity 02/06/2018, 1,037,219 views, 257,119 downloads, 1,497 votes, 1,244 kernels, medal 1 (awarded 12/16/2020)
### Context
This is the dataset used in the second chapter of Aurélien Géron's recent book 'Hands-On Machine Learning with Scikit-Learn and TensorFlow'. It serves as an excellent introduction to implementing machine learning algorithms because it requires rudimentary data cleaning, has an easily understandable list of variables, and sits at an optimal size between being too toyish and too cumbersome.
The data contains information from the 1990 California census. So although it may not help you with predicting current housing prices like the Zillow Zestimate dataset, it does provide an accessible introductory dataset for teaching people about the basics of machine learning.
### Content
The data pertains to the houses found in a given California district and some summary stats about them based on the 1990 census data. Be warned: the data aren't cleaned, so some preprocessing steps are required! The columns are as follows; their names are pretty self-explanatory (a preprocessing sketch follows this list):
- longitude
- latitude
- housing_median_age
- total_rooms
- total_bedrooms
- population
- households
- median_income
- median_house_value
- ocean_proximity
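A minimal preprocessing and regression sketch under stated assumptions (the file name `housing.csv` is assumed, `total_bedrooms` is treated as the column with missing values, and `ocean_proximity` as the only non-numeric column):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("housing.csv")  # assumed file name
# Assumed: total_bedrooms has missing values; fill them with the median.
df["total_bedrooms"] = df["total_bedrooms"].fillna(df["total_bedrooms"].median())
# Assumed: ocean_proximity is the only categorical column; one-hot encode it.
df = pd.get_dummies(df, columns=["ocean_proximity"])

X = df.drop(columns="median_house_value")
y = df["median_house_value"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```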
### Acknowledgements
This data was initially featured in the following paper:
Pace, R. Kelley, and Ronald Barry. "Sparse spatial autoregressions." Statistics & Probability Letters 33.3 (1997): 291-297.
and I encountered it in 'Hands-On Machine learning with Scikit-Learn and TensorFlow' by Aurélien Géron.
Aurélien Géron wrote:
This dataset is a modified version of the California Housing dataset available from:
[Luís Torgo's page](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) (University of Porto)
### Inspiration
See my kernel on machine learning basics in R using this dataset, or venture over to the following link for a python based introductory tutorial: https://github.com/ageron/handson-ml/tree/master/datasets/housing
Title: California Housing Prices | Subtitle: Median house prices for California districts derived from the 1990 census. | License: CC0: Public Domain | Version 1 (1 change) | Tags: none (0) | NormViews 0.085797, NormVotes 0.027337, NormDownloads 0.258378, NormKernels 0.163577, CombinedScore 0.133772, LogCombinedScore 0.12555 | Rank 45
Next record: Id 478, Dataset, created 12/01/2016, last activity 02/06/2018, 1,113,872 views, 168,193 downloads, 2,558 votes, 1,694 kernels, medal 1 (awarded 11/06/2019)
### Context
Although this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as "shrooming") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?
### Content
This dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family, drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like "leaflets three, let it be" for Poisonous Oak and Ivy.
- **Time period**: Donated to UCI ML 27 April 1987
### Inspiration
- What types of machine learning models perform best on this dataset?
- Which features are most indicative of a poisonous mushroom?
### Acknowledgements
This dataset was originally donated to the UCI Machine Learning repository. You can learn more about past research using the data [here][1].
#[Start a new kernel][2]
[1]: https://archive.ics.uci.edu/ml/datasets/Mushroom
[2]: https://www.kaggle.com/uciml/mushroom-classification/kernels?modal=true
Title: Mushroom Classification | Subtitle: Safe to eat or deadly poison? | License: CC0: Public Domain | Version 1 (1 change) | Tags: lending, banking, crowdfunding, insurance, investing, currencies and foreign exchange (6) | NormViews 0.092137, NormVotes 0.046712, NormDownloads 0.169016, NormKernels 0.222748, CombinedScore 0.132653, LogCombinedScore 0.124563 | Rank 46
Next record: Id 18, Dataset, created 01/08/2016, last activity 02/06/2018, 1,144,926 views, 238,492 downloads, 2,385 votes, 1,107 kernels, medal 1 (awarded 11/06/2019)
## Context
This dataset consists of reviews of fine foods from Amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain text review. It also includes reviews from all other Amazon categories.
## Contents
- Reviews.csv: Pulled from the corresponding SQLite table named Reviews in database.sqlite
- database.sqlite: Contains the table 'Reviews' (a small query sketch follows this list)
Data includes:
- Reviews from Oct 1999 - Oct 2012
- 568,454 reviews
- 256,059 users
- 74,258 products
- 260 users with > 50 reviews
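A hedged sketch of querying the SQLite copy (the `Score` column name is an assumption about the Reviews table's schema):

```python
import sqlite3

import pandas as pd

con = sqlite3.connect("database.sqlite")
# Count reviews per star rating; Score is an assumed column name.
ratings = pd.read_sql_query(
    "SELECT Score, COUNT(*) AS n_reviews FROM Reviews GROUP BY Score ORDER BY Score",
    con)
print(ratings)
con.close()
```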
[](https://www.kaggle.com/benhamner/d/snap/amazon-fine-food-reviews/reviews-wordcloud)
## Acknowledgements
See [this SQLite query](https://www.kaggle.com/benhamner/d/snap/amazon-fine-food-reviews/data-sample) for a quick sample of the dataset.
If you publish articles based on this dataset, please cite the following paper:
- J. McAuley and J. Leskovec. [From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews](http://i.stanford.edu/~julian/pdfs/www13.pdf). WWW, 2013.
Title: Amazon Fine Food Reviews | Subtitle: Analyze ~500,000 food reviews from Amazon | License: CC0: Public Domain | Version 2 (2 changes) | Tags: none (0) | NormViews 0.094706, NormVotes 0.043553, NormDownloads 0.239659, NormKernels 0.145562, CombinedScore 0.13087, LogCombinedScore 0.122987 | Rank 47
Next record: Id 268,833, Dataset, created 07/18/2019, last activity 07/18/2019, 1,320,246 views, 224,895 downloads, 3,051 votes, 882 kernels, medal 1 (awarded 11/06/2019)
### Context
Since 2008, guests and hosts have used Airbnb to expand on traveling possibilities and present a more unique, personalized way of experiencing the world. This dataset describes the listing activity and metrics in NYC, NY for 2019.
### Content
This data file includes all needed information to find out more about hosts, geographical availability, necessary metrics to make predictions and draw conclusions.
### Acknowledgements
This public dataset is part of Airbnb, and the original source can be found on this [website](http://insideairbnb.com).
### Inspiration
- What can we learn about different hosts and areas?
- What can we learn from predictions? (ex: locations, prices, reviews, etc)
- Which hosts are the busiest and why?
- Is there any noticeable difference of traffic among different areas and what could be the reason for it?
Title: New York City Airbnb Open Data | Subtitle: Airbnb listings and metrics in NYC, NY, USA (2019) | License: CC0: Public Domain | Version 3 (3 changes) | Tags: none (0) | NormViews 0.109208, NormVotes 0.055715, NormDownloads 0.225996, NormKernels 0.115976, CombinedScore 0.126724, LogCombinedScore 0.119314 | Rank 48
Next record: Id 17,860, Dataset, created 03/22/2018, last activity 03/22/2018, 790,755 views, 222,937 downloads, 1,026 votes, 1,493 kernels, medal 1 (awarded 07/22/2025)
### Context
The Iris flower data set is a multivariate data set introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species. The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica, and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters.
This dataset became a typical test case for many statistical classification techniques in machine learning, such as support vector machines.
### Content
The dataset contains a set of 150 records under 5 attributes - Petal Length, Petal Width, Sepal Length, Sepal width and Class(Species).
### Acknowledgements
This dataset is free and is publicly available at the UCI Machine Learning Repository
Title: Iris Flower Dataset | Subtitle: Iris flower data set used for multi-class classification. | License: CC0: Public Domain | Version 1 (1 change) | Tags: none (0) | NormViews 0.06541, NormVotes 0.018736, NormDownloads 0.224028, NormKernels 0.196318, CombinedScore 0.126123, LogCombinedScore 0.118781 | Rank 49
Next record: Id 1,346, Dataset, created 06/02/2017, last activity 02/06/2018, 1,623,232 views, 227,600 downloads, 3,843 votes, 510 kernels, medal 1 (awarded 11/06/2019)
### Context
Bitcoin is the longest running and most well known cryptocurrency, first released as open source in 2009 by the anonymous Satoshi Nakamoto. Bitcoin serves as a decentralized medium of digital exchange, with transactions verified and recorded in a public distributed ledger (the blockchain) without the need for a trusted record keeping authority or central intermediary. Transaction blocks contain a SHA-256 cryptographic hash of previous transaction blocks, and are thus "chained" together, serving as an immutable record of all transactions that have ever occurred. As with any currency/commodity on the market, bitcoin trading and financial instruments soon followed public adoption of bitcoin and continue to grow. Included here is historical bitcoin market data at 1-min intervals for select bitcoin exchanges where trading takes place. Happy (data) mining!
### Content
(See https://github.com/mczielinski/kaggle-bitcoin/ for automation/scraping script)
```
btcusd_1-min_data.csv
```
CSV files for select bitcoin exchanges for the time period of Jan 2012 to Present (Measured by UTC day), with minute to minute updates of OHLC (Open, High, Low, Close) and Volume in BTC.
If a timestamp is missing, or if there are jumps, this may be because the exchange (or its API) was down, the exchange (or its API) did not exist, or some other unforeseen technical error in data reporting or gathering. I'm not perfect, and I'm also busy! All effort has been made to deduplicate entries and verify the contents are correct and complete to the best of my ability, but obviously trust at your own risk.
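One practical way to work around those gaps is to aggregate the 1-minute bars to a coarser interval. A rough sketch (the `Timestamp` column, stored as Unix seconds, and the OHLCV column names are assumptions):

```python
import pandas as pd

df = pd.read_csv("btcusd_1-min_data.csv")
# Assumed: Timestamp is Unix time in seconds.
df["Timestamp"] = pd.to_datetime(df["Timestamp"], unit="s")
df = df.set_index("Timestamp").sort_index()

# Aggregate the 1-minute bars into daily OHLCV, skipping missing minutes.
daily = df.resample("1D").agg(
    {"Open": "first", "High": "max", "Low": "min", "Close": "last", "Volume": "sum"})
print(daily.tail())
```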
### Acknowledgements and Inspiration
Bitcoin charts for the data, originally. Now thank you to the Bitstamp API directly. The various exchange APIs, for making it difficult or unintuitive enough to get OHLC and volume data at 1-min intervals that I set out on this data scraping project. Satoshi Nakamoto and the novel core concept of the blockchain, as well as its first execution via the bitcoin protocol. I'd also like to thank viewers like you! Can't wait to see what code or insights you all have to share.
Title: Bitcoin Historical Data | Subtitle: Bitcoin data at 1-min intervals from select exchanges, Jan 2012 to Present | License: CC BY-SA 4.0 | Version 375 (361 changes) | Tags: none (0) | NormViews 0.134271, NormVotes 0.070178, NormDownloads 0.228714, NormKernels 0.067061, CombinedScore 0.125056, LogCombinedScore 0.117833 | Rank 50
Next record: Id 557,629, Dataset, created 03/16/2020, last activity 03/16/2020, 1,014,839 views, 298,365 downloads, 2,085 votes, 594 kernels, medal 1 (awarded 04/05/2020)
### Context
Coronaviruses are a large family of viruses which may cause illness in animals or humans. In humans, several coronaviruses are known to cause respiratory infections ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). The most recently discovered coronavirus causes coronavirus disease COVID-19 - World Health Organization
The number of new cases is increasing day by day around the world. This dataset has information from the states and union territories of India at the daily level.
State level data comes from [Ministry of Health & Family Welfare](https://www.mohfw.gov.in/)
Testing data and vaccination data comes from [covid19india](https://www.covid19india.org/). Huge thanks to them for their efforts!
Update on April 20, 2021: Thanks to the [Team at ISIBang](https://www.isibang.ac.in/~athreya/incovid19/), I was able to get the historical data for the periods that I missed to collect and updated the csv file.
### Content
COVID-19 cases at daily level is present in `covid_19_india.csv` file
Statewise testing details in `StatewiseTestingDetails.csv` file
Travel history dataset by @dheerajmpai - https://www.kaggle.com/dheerajmpai/covidindiatravelhistory
### Acknowledgements
Thanks to Indian [Ministry of Health & Family Welfare](https://www.mohfw.gov.in/) for making the data available to general public.
Thanks to [covid19india.org](http://portal.covid19india.org/) for making the individual level details, testing details, vaccination details available to general public.
Thanks to [Wikipedia](https://en.wikipedia.org/wiki/List_of_states_and_union_territories_of_India_by_population) for population information.
Thanks to the [Team at ISIBang](https://www.isibang.ac.in/~athreya/incovid19/)
Photo Courtesy - https://hgis.uw.edu/virus/
### Inspiration
Looking for data based suggestions to stop / delay the spread of virus
Title: COVID-19 in India | Subtitle: Dataset on Novel Corona Virus Disease 2019 in India | License: CC0: Public Domain | Version 237 (237 changes) | Tags: none (0) | NormViews 0.083946, NormVotes 0.038075, NormDownloads 0.299825, NormKernels 0.078107, CombinedScore 0.124988, LogCombinedScore 0.117772 | Rank 51
Next record: Id 216,167, Dataset, created 06/04/2019, last activity 06/04/2019, 1,247,947 views, 265,768 downloads, 1,501 votes, 758 kernels, medal 1 (awarded 07/22/2025)
### Context
This data set dates from 1988 and consists of four databases: Cleveland, Hungary, Switzerland, and Long Beach V. It contains 76 attributes, including the predicted attribute, but all published experiments refer to using a subset of 14 of them. The "target" field refers to the presence of heart disease in the patient. It is integer-valued: 0 = no disease and 1 = disease.
### Content
Attribute Information:
> 1. age
> 2. sex
> 3. chest pain type (4 values)
> 4. resting blood pressure
> 5. serum cholesterol in mg/dl
> 6. fasting blood sugar > 120 mg/dl
> 7. resting electrocardiographic results (values 0,1,2)
> 8. maximum heart rate achieved
> 9. exercise induced angina
> 10. oldpeak = ST depression induced by exercise relative to rest
> 11. the slope of the peak exercise ST segment
> 12. number of major vessels (0-3) colored by fluoroscopy
> 13. thal: 0 = normal; 1 = fixed defect; 2 = reversible defect
The names and social security numbers of the patients were recently removed from the database, replaced with dummy values.
Title: Heart Disease Dataset | Subtitle: Public Health Dataset | License: Unknown | Version 2 (1 change) | Tags: none (0) | NormViews 0.103228, NormVotes 0.02741, NormDownloads 0.267069, NormKernels 0.099671, CombinedScore 0.124345, LogCombinedScore 0.1172 | Rank 52
Next record: Id 2,477, Dataset, created 09/13/2017, last activity 02/06/2018, 1,128,885 views, 242,898 downloads, 2,243 votes, 833 kernels, medal 1 (awarded 11/06/2019)
### Context
This is the sentiment140 dataset. It contains 1,600,000 tweets extracted using the Twitter API. The tweets have been annotated (0 = negative, 4 = positive) and can be used to detect sentiment.
### Content
It contains the following 6 fields (a loading sketch follows the list):
1. **target**: the polarity of the tweet (*0* = negative, *2* = neutral, *4* = positive)
2. **ids**: The id of the tweet ( *2087*)
3. **date**: the date of the tweet (*Sat May 16 23:58:44 UTC 2009*)
4. **flag**: The query (*lyx*). If there is no query, then this value is NO_QUERY.
5. **user**: the user that tweeted (*robotickilldozr*)
6. **text**: the text of the tweet (*Lyx is cool*)
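A minimal loading sketch (the file name and the latin-1 encoding are assumptions; the field names above are assigned manually, on the assumption that the raw CSV ships without a header row):

```python
import pandas as pd

cols = ["target", "ids", "date", "flag", "user", "text"]
df = pd.read_csv("training.1600000.processed.noemoticon.csv",  # assumed file name
                 encoding="latin-1", names=cols, header=None)

# Map the polarity labels to a readable sentiment (0 = negative, 4 = positive).
df["sentiment"] = df["target"].map({0: "negative", 4: "positive"})
print(df["sentiment"].value_counts())
```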
### Acknowledgements
The official link regarding the dataset with resources about how it was generated is [here][1]
The official paper detailing the approach is [here][2]
Citation: Go, A., Bhayani, R. and Huang, L., 2009. Twitter sentiment classification using distant supervision. *CS224N Project Report, Stanford, 1(2009), p.12*.
### Inspiration
To detect severity from tweets. You [may have a look at this][3].
[1]: http://help.sentiment140.com/for-students/
[2]: http://cs.stanford.edu/people/alecmgo/papers/TwitterDistantSupervision09.pdf
[3]: https://www.linkedin.com/pulse/social-machine-learning-h2o-twitter-python-marios-michailidis
Title: Sentiment140 dataset with 1.6 million tweets | Subtitle: Sentiment analysis with tweets | License: Other (specified in description) | Version 2 (2 changes) | Tags: none (0) | NormViews 0.093379, NormVotes 0.04096, NormDownloads 0.244087, NormKernels 0.109533, CombinedScore 0.12199, LogCombinedScore 0.115104 | Rank 53
Next record: Id 116,573, Dataset, created 02/06/2019, last activity 02/06/2019, 160,109 views, 101,599 downloads, 371 votes, 2,714 kernels, medal 2 (awarded 07/07/2025)
Dataset for Kaggle's [Data Visualization](https://www.kaggle.com/learn/data-visualization) course
Title: Interesting Data to Visualize | Subtitle: For Kaggle's Data Visualization Course | License: Unknown | Version 2 (4 changes) | Tags: none (0) | NormViews 0.013244, NormVotes 0.006775, NormDownloads 0.102096, NormKernels 0.35687, CombinedScore 0.119746, LogCombinedScore 0.113102 | Rank 54
Next record: Id 54,339, Dataset, created 09/19/2018, last activity 09/19/2018, 1,084,932 views, 213,831 downloads, 2,138 votes, 1,024 kernels, medal 1 (awarded 11/06/2019)
# Overview
A dataset more interesting than digit classification, meant to get biology and medicine students more excited about machine learning and image processing.
## Original Data Source
- Original Challenge: https://challenge2018.isic-archive.com
- https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T
[1] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, Allan Halpern: “Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)”, 2018; https://arxiv.org/abs/1902.03368
[2] Tschandl, P., Rosendahl, C. & Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5, 180161 doi:10.1038/sdata.2018.161 (2018).
## From Authors
Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available dataset of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc).
More than 50% of lesions are confirmed through histopathology (histo), the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The dataset includes lesions with multiple images, which can be tracked by the lesion_id-column within the HAM10000_metadata file.
The test set is not public, but the evaluation server remains running (see the challenge website). Any publications written using the HAM10000 data should be evaluated on the official test set hosted there, so that methods can be fairly compared.
Title: Skin Cancer MNIST: HAM10000 | Subtitle: a large collection of multi-source dermatoscopic images of pigmented lesions | License: CC BY-NC-SA 4.0 | Version 2 (2 changes) | Tags: finance, real estate, marketing, ratings and reviews, e-commerce services, travel (6) | NormViews 0.089744, NormVotes 0.039042, NormDownloads 0.214878, NormKernels 0.134648, CombinedScore 0.119578, LogCombinedScore 0.112952 | Rank 55
Next record: Id 5,857, Dataset, created 12/02/2017, last activity 02/06/2018, 1,296,086 views, 201,710 downloads, 3,254 votes, 676 kernels, medal 1 (awarded 11/06/2019)
# Fruits-360 dataset: A dataset of images containing fruits, vegetables, nuts and seeds
## Version: 2025.09.29.0
## Content
The following fruits, vegetables, nuts and seeds are included:
Apples (different varieties: Crimson Snow, Golden, Golden-Red, Granny Smith, Pink Lady, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red, Lady Finger), Beans, Beetroot Red, Blackberry, Blueberry, Cabbage, Caju seed, Cactus fruit, Cantaloupe (2 varieties), Carambula, Carrot, Cauliflower, Cherimoya, Cherry (different varieties, Rainier), Cherry Wax (Yellow, Red, Black), Chestnut, Clementine, Cocos, Corn (with husk), Cucumber (ripened, regular), Dates, Eggplant, Fig, Ginger Root, Goosberry, Granadilla, Grape (Blue, Pink, White (different varieties)), Grapefruit (Pink, White), Guava, Hazelnut, Huckleberry, Kiwi, Kaki, Kohlrabi, Kumsquats, Lemon (normal, Meyer), Lime, Lychee, Mandarine, Mango (Green, Red), Mangostan, Maracuja, Melon Piel de Sapo, Mulberry, Nectarine (Regular, Flat), Nut (Forest, Pecan), Onion (Red, White), Orange, Papaya, Passion fruit, Peach (different varieties), Pepino, Pear (different varieties, Abate, Forelle, Kaiser, Monster, Red, Stone, Williams), Pepper (Red, Green, Orange, Yellow), Physalis (normal, with Husk), Pineapple (normal, Mini), Pistachio, Pitahaya Red, Plum (different varieties), Pomegranate, Pomelo Sweetie, Potato (Red, Sweet, White), Quince, Rambutan, Raspberry, Redcurrant, Salak, Strawberry (normal, Wedge), Tamarillo, Tangelo, Tomato (different varieties, Maroon, Cherry Red, Yellow, not ripened, Heart), Walnut, Watermelon, Zucchini (green and dark).
## Branches
The dataset has 5 major branches:
- The _100x100_ branch, where all images have 100x100 pixels. See the _fruits-360_100x100_ folder.
- The _original-size_ branch, where all images are at their original (captured) size. See the _fruits-360_original-size_ folder.
- The _meta_ branch, which contains additional information about the objects in the Fruits-360 dataset. See the _fruits-360_dataset_meta_ folder.
- The _multi_ branch, which contains images with multiple fruits, vegetables, nuts and seeds. These images are not labeled. See the _fruits-360_multi_ folder.
- The _3_body_problem_ branch, where the Training and Test folders contain different varieties of the 3 fruits and vegetables (Apples, Cherries and Tomatoes). See the _fruits-360_3-body-problem_ folder.
## How to cite
[Mihai Oltean](https://mihaioltean.github.io), Fruits-360 dataset, 2017-
## Dataset properties
### For the _100x100_ branch
Total number of images: 150804.
Training set size: 113083 images.
Test set size: 37721 images.
Number of classes: 219 (fruits, vegetables, nuts and seeds).
Image size: 100x100 pixels.
### For the _original-size_ branch
Total number of images: 69227.
Training set size: 35283 images.
Validation set size: 17643 images
Test set size: 17537 images.
Number of classes: 104 (fruits, vegetables, nuts and seeds).
Image size: various (original, as-captured sizes).
### For the _3-body-problem_ branch
Total number of images: 47033.
Training set size: 34800 images.
Test set size: 12233 images.
Number of classes: 3 (Apples, Cherries, Tomatoes).
Number of varieties: Apples = 29; Cherries = 12; Tomatoes = 19.
Image size: 100x100 pixels.
### For the _meta_ branch
Number of classes: 26 (fruits, vegetables, nuts and seeds).
### For the _multi_ branch
Number of images: 150.
## Filename format:
### For the _100x100_ branch
image_index_100.jpg (e.g. 31_100.jpg) or
r_image_index_100.jpg (e.g. r_31_100.jpg) or
r?_image_index_100.jpg (e.g. r2_31_100.jpg)
where "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis. "100" comes from image size (100x100 pixels).
Different varieties of the same fruit (apple, for instance) are stored as belonging to different classes.
### For the _original-size_ branch
r?_image_index.jpg (e.g. r2_31.jpg)
where "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis.
The name of the image files in the new version does NOT contain the "\_100" suffix anymore. This will help you to make the distinction between the _original-size_ branch and the _100x100_ branch.
### For the _multi_ branch
The file's name is the concatenation of the names of the fruits inside that picture.
## Alternate download
The Fruits-360 dataset can be downloaded from:
**Kaggle** [https://www.kaggle.com/moltean/fruits](https://www.kaggle.com/moltean/fruits)
**GitHub** [https://github.com/fruits-360](https://github.com/fruits-360)
## How fruits were filmed
Fruits and vegetables were planted in the shaft of a low-speed motor (3 rpm) and a short movie of 20 seconds was recorded.
A Logitech C920 camera was used for filming the fruits. This is one of the best webcams available.
Behind the fruits, we placed a white sheet of paper as a background.
Here is a movie showing how the fruits and vegetables are filmed: https://youtu.be/_HFKJ144JuU
### How fruits were extracted from the background
However, due to the variations in the lighting conditions, the background was not uniform and we wrote a dedicated algorithm that extracts the fruit from the background. This algorithm is of flood fill type: we start from each edge of the image and we mark all pixels there, then we mark all pixels found in the neighborhood of the already marked pixels for which the distance between colors is less than a prescribed value. We repeat the previous step until no more pixels can be marked.
All marked pixels are considered as being background (which is then filled with white) and the rest of the pixels are considered as belonging to the object.
The maximum value for the distance between 2 neighbor pixels is a parameter of the algorithm and is set (by trial and error) for each movie.
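A simplified sketch of that edge-seeded flood fill (not the authors' actual code; the RGB input format and the color-distance threshold are assumptions):

```python
from collections import deque

import numpy as np


def remove_background(img: np.ndarray, max_dist: float = 30.0) -> np.ndarray:
    """Flood-fill from the image edges and paint the reached pixels white.

    img is an H x W x 3 array; a neighbor is treated as background when its
    color is within max_dist of an already-marked pixel.
    """
    h, w, _ = img.shape
    marked = np.zeros((h, w), dtype=bool)
    queue = deque()

    # Seed the fill with every pixel on the image border.
    for x in range(w):
        queue.extend([(0, x), (h - 1, x)])
    for y in range(h):
        queue.extend([(y, 0), (y, w - 1)])
    for y, x in queue:
        marked[y, x] = True

    # Breadth-first expansion over 4-connected neighbors.
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not marked[ny, nx]:
                dist = np.linalg.norm(img[ny, nx].astype(float) - img[y, x].astype(float))
                if dist < max_dist:
                    marked[ny, nx] = True
                    queue.append((ny, nx))

    out = img.copy()
    out[marked] = 255  # background becomes white; the fruit is left untouched
    return out
```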
Pictures from the test-multiple_fruits folder were taken with a Nexus 5X phone or an iPhone 11.
## History
Fruits were filmed at the dates given below (YYYY.MM.DD):
2017.02.25 - First fruit filmed (Apple golden).
2023.12.30 - Official Github repository is now [https://github.com/fruits-360](https://github.com/fruits-360)
2025.04.21 - Fruits-360-3-body-problem uploaded.
## License
CC BY-SA 4.0
Copyright (c) 2017-, [Mihai Oltean](https://mihaioltean.github.io)
You are free to:
*Share* — copy and redistribute the material in any medium or format for any purpose, even commercially.
*Adapt* — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
*Attribution* — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
*ShareAlike* — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
Title: Fruits-360 dataset | Subtitle: A dataset with 150804 images of 219 fruits, vegetables, nuts and seeds | License: CC BY-SA 4.0 | Version 55 (116 changes) | Tags: none (0) | NormViews 0.10721, NormVotes 0.059422, NormDownloads 0.202697, NormKernels 0.088889, CombinedScore 0.114554, LogCombinedScore 0.108455 | Rank 56
Next record: Id 727,551, Dataset, created 06/20/2020, last activity 06/20/2020, 1,111,531 views, 170,287 downloads, 2,423 votes, 1,104 kernels, medal 1 (awarded 08/22/2020)
# About this dataset
> Cardiovascular diseases (CVDs) are the **number 1 cause of death globally**, taking an estimated **17.9 million lives each year**, which accounts for **31% of all deaths worldwide**.
Heart failure is a common event caused by CVDs and this dataset contains 12 features that can be used to predict mortality by heart failure.
> Most cardiovascular diseases can be prevented by addressing behavioural risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity and harmful use of alcohol using population-wide strategies.
> People with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease) need **early detection** and management wherein a machine learning model can be of great help.
# How to use this dataset
> - Create a model for predicting mortality caused by Heart Failure.
- Your kernel can be featured here!
- [More datasets](https://www.kaggle.com/andrewmvd/datasets)
# Acknowledgements
If you use this dataset in your research, please credit the authors
> ### Citation
Davide Chicco, Giuseppe Jurman: Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone. BMC Medical Informatics and Decision Making 20, 16 (2020). ([link](https://doi.org/10.1186/s12911-020-1023-5))
> ### License
CC BY 4.0
> ### Splash icon
Icon by [Freepik](https://www.flaticon.com/authors/freepik), available on [Flaticon](https://www.flaticon.com/free-icon/heart_1186541).
> ### Splash banner
Wallpaper by [jcomp](https://br.freepik.com/jcomp), available on [Freepik](https://br.freepik.com/fotos-gratis/simplesmente-design-minimalista-com-estetoscopio-de-equipamento-de-medicina-ou-phonendoscope_5018002.htm#page=1&query=cardiology&position=3).
Title: Heart Failure Prediction | Subtitle: 12 clinical features for predicting death events. | License: Attribution 4.0 International (CC BY 4.0) | Version 1 (1 change) | Tags: none (0) | NormViews 0.091944, NormVotes 0.044247, NormDownloads 0.171121, NormKernels 0.145168, CombinedScore 0.11312, LogCombinedScore 0.107167 | Rank 57
Next record: Id 4,104, Dataset, created 11/06/2017, last activity 02/06/2018, 521,930 views, 91,355 downloads, 1,630 votes, 2,179 kernels, medal 1 (awarded 11/06/2019)
### Context
I'm a crowdfunding enthusiast and I've been watching Kickstarter since its early days. Right now I just collect data, and the only app I've made is this Twitter bot, which tweets any project reaching some milestone: @bloomwatcher. I have a lot of other ideas, but sadly not enough time to develop them... But I hope you can!
### Content
You'll find the most useful data for project analysis. Columns are self-explanatory except for:
- usd_pledged: conversion in US dollars of the pledged column (conversion done by Kickstarter).
- usd_pledged_real: conversion in US dollars of the pledged column (conversion from the [Fixer.io API][1]).
- usd_goal_real: conversion in US dollars of the goal column (conversion from the [Fixer.io API][1]).
### Acknowledgements
Data are collected from [Kickstarter Platform][2]
The USD conversions (the usd_pledged_real and usd_goal_real columns) were generated with the [convert ks pledges to usd][3] script by [tonyplaysguitar][4].
### Inspiration
I hope to see great projects, and why not a model to predict if a project will be successful before it is released? :)
[1]: http://Fixer.io
[2]: https://www.kickstarter.com/
[3]: https://www.kaggle.com/tonyplaysguitar/convert-ks-pledges-to-usd/
[4]: https://www.kaggle.com/tonyplaysguitar
Title: Kickstarter Projects | Subtitle: More than 300,000 kickstarter projects | License: CC BY-NC-SA 4.0 | Version 7 (7 changes) | Tags: none (0) | NormViews 0.043173, NormVotes 0.029766, NormDownloads 0.091802, NormKernels 0.286522, CombinedScore 0.112816, LogCombinedScore 0.106894 | Rank 58
Next record: Id 1,985, Dataset, created 08/17/2017, last activity 02/06/2018, 1,336,284 views, 234,872 downloads, 1,945 votes, 442 kernels, medal 1 (awarded 01/31/2020)
### Context
Typically e-commerce datasets are proprietary and consequently hard to find among publicly available data. However, [The UCI Machine Learning Repository][1] has made this dataset containing actual transactions from 2010 and 2011. The dataset is maintained on their site, where it can be found by the title "Online Retail".
### Content
"This is a transnational data set which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail.The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers."
### Acknowledgements
Per the UCI Machine Learning Repository, this data was made available by Dr Daqing Chen, Director: Public Analytics group. chend '@' lsbu.ac.uk, School of Engineering, London South Bank University, London SE1 0AA, UK.
Image from stocksnap.io.
### Inspiration
Analyses for this dataset could include time series, clustering, classification and more.
[1]: http://archive.ics.uci.edu/ml/index.php
Title: E-Commerce Data | Subtitle: Actual transactions from UK retailer | License: Unknown | Version 1 (1 change) | Tags: none (0) | NormViews 0.110535, NormVotes 0.035518, NormDownloads 0.236022, NormKernels 0.05812, CombinedScore 0.110049, LogCombinedScore 0.104404 | Rank 59
Next record: Id 121, Dataset, created 08/22/2016, last activity 02/06/2018, 844,510 views, 147,515 downloads, 2,681 votes, 1,314 kernels, medal 1 (awarded 11/06/2019)
This data set includes 721 Pokemon, including their number, name, first and second type, and basic stats: HP, Attack, Defense, Special Attack, Special Defense, and Speed. It has been of great use when teaching statistics to kids. With certain types you can also give a geeky introduction to machine learning.
These are the raw attributes that are used for calculating how much damage an attack will do in the games. This dataset is about the pokemon games (*NOT* pokemon cards or Pokemon Go).
The data as described by [Myles O'Neill](https://www.kaggle.com/mylesoneill) is:
- **#**: ID for each pokemon
- **Name**: Name of each pokemon
- **Type 1**: Each pokemon has a type, this determines weakness/resistance to attacks
- **Type 2**: Some pokemon are dual type and have 2
- **Total**: sum of all stats that come after this, a general guide to how strong a pokemon is
- **HP**: hit points, or health, defines how much damage a pokemon can withstand before fainting
- **Attack**: the base modifier for normal attacks (eg. Scratch, Punch)
- **Defense**: the base damage resistance against normal attacks
- **SP Atk**: special attack, the base modifier for special attacks (e.g. fire blast, bubble beam)
- **SP Def**: the base damage resistance against special attacks
- **Speed**: determines which pokemon attacks first each round
The data for this table has been acquired from several different sites, including:
- [pokemon.com](http://www.pokemon.com/us/pokedex/)
- [pokemondb](http://pokemondb.net/pokedex)
- [bulbapedia](http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number)
One question has been answered with this database: the type of a Pokemon cannot be inferred only from its Attack and Defense. It would be worthwhile to find which two variables can define the type of a Pokemon, if any. Two variables can be plotted in a 2D space and used as an example for machine learning. This could mean the creation of a visual example any geeky machine learning class would love.
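As a quick illustration of that 2D plotting idea (the file name `Pokemon.csv` is an assumption; columns as listed above):

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("Pokemon.csv")  # assumed file name
types = df["Type 1"].astype("category")

# Two stats in a 2D space, colored by primary type.
plt.scatter(df["Attack"], df["Defense"], c=types.cat.codes, cmap="tab20", s=15)
plt.xlabel("Attack")
plt.ylabel("Defense")
plt.title("Attack vs. Defense by primary type")
plt.show()
```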
Title: Pokemon with stats | Subtitle: 721 Pokemon with stats and types | License: CC0: Public Domain | Version 2 (2 changes) | Tags: none (0) | NormViews 0.069856, NormVotes 0.048958, NormDownloads 0.148237, NormKernels 0.172781, CombinedScore 0.109958, LogCombinedScore 0.104322 | Rank 60
Next record: Id 1,546,318, Dataset, created 08/22/2021, last activity 08/22/2021, 1,062,787 views, 212,570 downloads, 2,820 votes, 636 kernels, medal 1 (awarded 09/23/2021)
### Context
**Problem Statement**
Customer Personality Analysis is a detailed analysis of a company's ideal customers. It helps a business to better understand its customers and makes it easier to modify products according to the specific needs, behaviors and concerns of different types of customers.
Customer personality analysis helps a business to modify its product based on its target customers from different types of customer segments. For example, instead of spending money to market a new product to every customer in the company’s database, a company can analyze which customer segment is most likely to buy the product and then market the product only on that particular segment.
### Content
**Attributes**
**People**
* ID: Customer's unique identifier
* Year_Birth: Customer's birth year
* Education: Customer's education level
* Marital_Status: Customer's marital status
* Income: Customer's yearly household income
* Kidhome: Number of children in customer's household
* Teenhome: Number of teenagers in customer's household
* Dt_Customer: Date of customer's enrollment with the company
* Recency: Number of days since customer's last purchase
* Complain: 1 if the customer complained in the last 2 years, 0 otherwise
**Products**
* MntWines: Amount spent on wine in last 2 years
* MntFruits: Amount spent on fruits in last 2 years
* MntMeatProducts: Amount spent on meat in last 2 years
* MntFishProducts: Amount spent on fish in last 2 years
* MntSweetProducts: Amount spent on sweets in last 2 years
* MntGoldProds: Amount spent on gold in last 2 years
**Promotion**
* NumDealsPurchases: Number of purchases made with a discount
* AcceptedCmp1: 1 if customer accepted the offer in the 1st campaign, 0 otherwise
* AcceptedCmp2: 1 if customer accepted the offer in the 2nd campaign, 0 otherwise
* AcceptedCmp3: 1 if customer accepted the offer in the 3rd campaign, 0 otherwise
* AcceptedCmp4: 1 if customer accepted the offer in the 4th campaign, 0 otherwise
* AcceptedCmp5: 1 if customer accepted the offer in the 5th campaign, 0 otherwise
* Response: 1 if customer accepted the offer in the last campaign, 0 otherwise
**Place**
* NumWebPurchases: Number of purchases made through the company’s website
* NumCatalogPurchases: Number of purchases made using a catalogue
* NumStorePurchases: Number of purchases made directly in stores
* NumWebVisitsMonth: Number of visits to company’s website in the last month
### Target
Need to perform clustering to summarize customer segments.
### Acknowledgement
The dataset for this project is provided by Dr. Omar Romero-Hernandez.
### Solution
You can take help from following link to know more about the approach to solve this problem.
[Visit this URL ](https://thecleverprogrammer.com/2021/02/08/customer-personality-analysis-with-python/)
### Inspiration
Happy learning!
**Hope you like this dataset; please don't forget to upvote it!**
Title: Customer Personality Analysis | Subtitle: Analysis of company's ideal customers | License: CC0: Public Domain | Version 1 (1 change) | Tags: none (0) | NormViews 0.087912, NormVotes 0.051497, NormDownloads 0.21361, NormKernels 0.083629, CombinedScore 0.109162, LogCombinedScore 0.103605 | Rank 61