Column schema (ranges are the observed minimum and maximum; "nullable" marks columns that contain nulls):

- Unnamed: 0 (row index): int64, 0 to 541k
- Id: int64, 6 to 8.39M
- CreatorUserId: int64, 1 to 29.3M
- OwnerUserId: float64, 368 to 29.3M, nullable
- OwnerOrganizationId: float64, 2 to 5.18k, nullable
- CurrentDatasetVersionId: float64, 58 to 13.2M, nullable
- CurrentDatasourceVersionId: float64, 58 to 13.9M, nullable
- ForumId: int64, 762 to 8.95M
- Type: string, 1 class
- CreationDate: string, length 19
- LastActivityDate: string, length 10
- TotalViews: int64, 0 to 12.1M
- TotalDownloads: int64, 0 to 995k
- TotalVotes: int64, 0 to 54.8k
- TotalKernels: int64, 0 to 7.61k
- Medal: float64, 1 to 3, nullable
- MedalAwardDate: string, length 10, nullable
- DatasetId: int64, 6 to 8.39M
- Description: string, length 1 to 365k, nullable
- Title: string, length 2 to 57, nullable
- Subtitle: string, length 1 to 168, nullable
- LicenseName: string, 32 classes
- VersionNumber: float64, 1 to 16.2k, nullable
- VersionChangesCount: int64, 0 to 16.2k
- AllTags: string, 48 classes
- AllTagsCount: int64, 0 to 11
- NormViews: float64, 0 to 1
- NormVotes: float64, 0 to 1
- NormDownloads: float64, 0 to 1
- NormKernels: float64, 0 to 1
- CombinedScore: float64, 0 to 0.74
- LogCombinedScore: float64, 0 to 0.55
- Rank_CombinedScore: int64, 1 to 242k

Each record below is listed in column order: the row's metadata fields, then its Description, then its Title, Subtitle, LicenseName and score fields.
Row 648 | Id: 310 | CreatorUserId: 14,069 | OwnerUserId: null | OwnerOrganizationId: 1,160 | CurrentDatasetVersionId: 23,498 | CurrentDatasourceVersionId: 23,502 | ForumId: 1,838 | Type: Dataset | CreationDate: 11/03/2016 13:21:36 | LastActivityDate: 02/06/2018 | TotalViews: 12,089,246 | TotalDownloads: 995,129 | TotalVotes: 12,501 | TotalKernels: 5,571 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 310
Context
---------
It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.
Content
---------
The dataset contains transactions made by credit cards in September 2013 by European cardholders.
This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and takes value 1 in case of fraud and 0 otherwise.
Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion matrix accuracy is not meaningful for unbalanced classification.
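A minimal sketch of computing AUPRC with scikit-learn is shown below; the file name `creditcard.csv`, the logistic-regression baseline, and the split settings are assumptions, not part of the dataset documentation.

```python
# Minimal sketch: evaluate a baseline classifier with the Area Under the Precision-Recall Curve (AUPRC).
# Assumes the file is named "creditcard.csv" and contains the 'Class' column described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("AUPRC:", average_precision_score(y_test, scores))
```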
Update (03/05/2021)
---------
A simulator for transaction data has been released as part of the practical handbook on Machine Learning for Credit Card Fraud Detection - https://fraud-detection-handbook.github.io/fraud-detection-handbook/Chapter_3_GettingStarted/SimulatedDataset.html. We invite all practitioners interested in fraud detection datasets to also check out this data simulator, and the methodologies for credit card fraud detection presented in the book.
Acknowledgements
---------
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection.
More details on current and past projects on related topics are available on [https://www.researchgate.net/project/Fraud-detection-5][1] and the page of the [DefeatFraud][2] project
Please cite the following works:
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. [Calibrating Probability with Undersampling for Unbalanced Classification.][3] In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. [Learned lessons in credit card fraud detection from a practitioner perspective][4], Expert systems with applications,41,10,4915-4928,2014, Pergamon
Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. [Credit card fraud detection: a realistic modeling and a novel learning strategy,][5] IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE
Dal Pozzolo, Andrea [Adaptive Machine learning for credit card fraud detection][6] ULB MLG PhD thesis (supervised by G. Bontempi)
Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. [Scarff: a scalable framework for streaming credit card fraud detection with Spark][7], Information Fusion, 41, 182-194, 2018, Elsevier
Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. [Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization,][8] International Journal of Data Science and Analytics, 5, 4, 285-300, 2018, Springer International Publishing
Bertrand Lebichot, Yann-Aël Le Borgne, Liyun He, Frederic Oblé, Gianluca Bontempi [Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection](https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection), INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019
Fabrizio Carcillo, Yann-Aël Le Borgne, Olivier Caelen, Frederic Oblé, Gianluca Bontempi [Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection](https://www.researchgate.net/publication/333143698_Combining_Unsupervised_and_Supervised_Learning_in_Credit_Card_Fraud_Detection), Information Sciences, 2019
Yann-Aël Le Borgne, Gianluca Bontempi [Reproducible Machine Learning for Credit Card Fraud Detection - Practical Handbook](https://www.researchgate.net/publication/351283764_Machine_Learning_for_Credit_Card_Fraud_Detection_-_Practical_Handbook)
Bertrand Lebichot, Gianmarco Paldino, Wissam Siblini, Liyun He, Frederic Oblé, Gianluca Bontempi [Incremental learning strategies for credit cards fraud detection](https://www.researchgate.net/publication/352275169_Incremental_learning_strategies_for_credit_cards_fraud_detection), International Journal of Data Science and Analytics
[1]: https://www.researchgate.net/project/Fraud-detection-5
[2]: https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/
[3]: https://www.researchgate.net/publication/283349138_Calibrating_Probability_with_Undersampling_for_Unbalanced_Classification
[4]: https://www.researchgate.net/publication/260837261_Learned_lessons_in_credit_card_fraud_detection_from_a_practitioner_perspective
[5]: https://www.researchgate.net/publication/319867396_Credit_Card_Fraud_Detection_A_Realistic_Modeling_and_a_Novel_Learning_Strategy
[6]: http://di.ulb.ac.be/map/adalpozz/pdf/Dalpozzolo2015PhD.pdf
[7]: https://www.researchgate.net/publication/319616537_SCARFF_a_Scalable_Framework_for_Streaming_Credit_Card_Fraud_Detection_with_Spark
[8]: https://www.researchgate.net/publication/332180999_Deep-Learning_Domain_Adaptation_Techniques_for_Credit_Cards_Fraud_Detection
Title: Credit Card Fraud Detection | Subtitle: Anonymized credit card transactions labeled as fraudulent or genuine | LicenseName: Database: Open Database, Contents: Database Contents | VersionNumber: 3 | VersionChangesCount: 3 | AllTags: null | AllTagsCount: 0 | NormViews: 1 | NormVotes: 0.228283 | NormDownloads: 1 | NormKernels: 0.732544 | CombinedScore: 0.740207 | LogCombinedScore: 0.554004 | Rank_CombinedScore: 1

Row 8 | Id: 19 | CreatorUserId: 1 | OwnerUserId: null | OwnerOrganizationId: 7 | CurrentDatasetVersionId: 420 | CurrentDatasourceVersionId: 420 | ForumId: 997 | Type: Dataset | CreationDate: 01/12/2016 00:33:31 | LastActivityDate: 02/06/2018 | TotalViews: 2,684,143 | TotalDownloads: 761,523 | TotalVotes: 4,413 | TotalKernels: 7,605 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 19
The Iris dataset was used in R.A. Fisher's classic 1936 paper, [The Use of Multiple Measurements in Taxonomic Problems](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf), and can also be found on the [UCI Machine Learning Repository][1].
It includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.
The columns in this dataset are:
- Id
- SepalLengthCm
- SepalWidthCm
- PetalLengthCm
- PetalWidthCm
- Species
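As a quick starting point, a minimal sketch for loading the file and fitting a simple classifier is shown below; the file name `Iris.csv` is an assumption.

```python
# Minimal sketch: load the Iris CSV (columns listed above) and score a simple classifier.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = pd.read_csv("Iris.csv")
X = iris[["SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"]]
y = iris["Species"]

# 5-fold cross-validated accuracy of a k-nearest-neighbours baseline
print(cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean())
```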
[1]: http://archive.ics.uci.edu/ml/
Title: Iris Species | Subtitle: Classify iris plants into three species in this classic dataset | LicenseName: CC0: Public Domain | VersionNumber: 2 | VersionChangesCount: 2 | AllTags: null | AllTagsCount: 0 | NormViews: 0.222027 | NormVotes: 0.080587 | NormDownloads: 0.765251 | NormKernels: 1 | CombinedScore: 0.516966 | LogCombinedScore: 0.416712 | Rank_CombinedScore: 2

Row 24 | Id: 228 | CreatorUserId: 1 | OwnerUserId: null | OwnerOrganizationId: 7 | CurrentDatasetVersionId: 482 | CurrentDatasourceVersionId: 482 | ForumId: 1,652 | Type: Dataset | CreationDate: 10/06/2016 18:31:56 | LastActivityDate: 02/06/2018 | TotalViews: 3,048,278 | TotalDownloads: 700,575 | TotalVotes: 4,748 | TotalKernels: 3,681 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 228
## Context
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
## Content
The dataset consists of several medical predictor variables and one target variable, `Outcome`. Predictor variables include the number of pregnancies the patient has had, their BMI, insulin level, age, and so on.
## Acknowledgements
Smith, J.W., Everhart, J.E., Dickson, W.C., Knowler, W.C., & Johannes, R.S. (1988). [Using the ADAP learning algorithm to forecast the onset of diabetes mellitus][1]. *In Proceedings of the Symposium on Computer Applications and Medical Care* (pp. 261--265). IEEE Computer Society Press.
## Inspiration
Can you build a machine learning model to accurately predict whether or not the patients in the dataset have diabetes?
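One possible baseline is sketched below, under the assumption that the file is named `diabetes.csv` and uses the columns described above.

```python
# Minimal sketch: predict the 'Outcome' column with a logistic-regression baseline.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```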
[1]: http://rexa.info/paper/04587c10a7c92baa01948f71f2513d5928fe8e81
Title: Pima Indians Diabetes Database | Subtitle: Predict the onset of diabetes based on diagnostic measures | LicenseName: CC0: Public Domain | VersionNumber: 1 | VersionChangesCount: 1 | AllTags: dentistry, drugs and medications, hospitals and treatment centers | AllTagsCount: 3 | NormViews: 0.252148 | NormVotes: 0.086704 | NormDownloads: 0.704004 | NormKernels: 0.484024 | CombinedScore: 0.38172 | LogCombinedScore: 0.323329 | Rank_CombinedScore: 3

Row 19,307 | Id: 434,238 | CreatorUserId: 1,571,785 | OwnerUserId: 1,571,785 | OwnerOrganizationId: null | CurrentDatasetVersionId: 2,654,038 | CurrentDatasourceVersionId: 2,698,094 | ForumId: 446,914 | Type: Dataset | CreationDate: 12/04/2019 05:57:54 | LastActivityDate: 12/04/2019 | TotalViews: 3,613,758 | TotalDownloads: 655,791 | TotalVotes: 9,348 | TotalKernels: 2,234 | Medal: 1 | MedalAwardDate: 01/17/2020 | DatasetId: 434,238
### Other Platforms' Datasets
- [Amazon Prime Video Movies and TV Shows](https://www.kaggle.com/shivamb/amazon-prime-movies-and-tv-shows)
- [Disney+ Movies and TV Shows](https://www.kaggle.com/shivamb/disney-movies-and-tv-shows)
- [Netflix Movies and TV Shows](https://www.kaggle.com/shivamb/netflix-shows)
- [Hulu Movies and TV Shows](https://www.kaggle.com/shivamb/hulu-movies-and-tv-shows)
### Netflix Movies and TV Shows
> **About this Dataset:** *[Netflix](https://en.wikipedia.org/wiki/Netflix) is one of the most popular media and video streaming platforms. It has over 8,000 movies and TV shows available on the platform and, as of mid-2021, over 200M subscribers globally. This tabular dataset consists of listings of all the movies and TV shows available on Netflix, along with details such as cast, directors, ratings, release year, duration, etc.*
Featured Notebooks: [Click Here to View Featured Notebooks](https://www.kaggle.com/shivamb/netflix-shows/discussion/279376)
Milestone: Oct 18th, 2021: [Most Upvoted Dataset on Kaggle by an Individual Contributor](https://www.kaggle.com/shivamb/netflix-shows/discussion/279377)
### Interesting Task Ideas
> 1. Understanding what content is available in different countries
> 2. Identifying similar content by matching text-based features (see the sketch after this list)
> 3. Network analysis of Actors / Directors and find interesting insights
> 4. Does Netflix have more focus on TV shows than movies in recent years?
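A minimal sketch for idea 2 (matching titles by text-based features) follows; the file name `netflix_titles.csv` and the `title`/`description` column names are assumptions about the CSV layout.

```python
# Sketch: find similar titles by comparing TF-IDF vectors of the description text.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_csv("netflix_titles.csv").dropna(subset=["description"]).reset_index(drop=True)
matrix = TfidfVectorizer(stop_words="english").fit_transform(df["description"])

sims = cosine_similarity(matrix[0], matrix).ravel()   # similarity of the first title to all titles
print(df.loc[sims.argsort()[::-1][1:6], "title"])     # five most similar titles
```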
[Check my Other Datasets](https://www.kaggle.com/shivamb/datasets)
Title: Netflix Movies and TV Shows | Subtitle: Listings of movies and tv shows on Netflix - Regularly Updated | LicenseName: CC0: Public Domain | VersionNumber: 5 | VersionChangesCount: 5 | AllTags: null | AllTagsCount: 0 | NormViews: 0.298923 | NormVotes: 0.170705 | NormDownloads: 0.659001 | NormKernels: 0.293754 | CombinedScore: 0.355596 | LogCombinedScore: 0.304241 | Rank_CombinedScore: 4

Row 5,818 | Id: 1,442 | CreatorUserId: 519,516 | OwnerUserId: 519,516 | OwnerOrganizationId: null | CurrentDatasetVersionId: 8,172 | CurrentDatasourceVersionId: 8,172 | ForumId: 4,272 | Type: Dataset | CreationDate: 06/21/2017 21:36:28 | LastActivityDate: 02/06/2018 | TotalViews: 1,439,580 | TotalDownloads: 337,021 | TotalVotes: 3,784 | TotalKernels: 6,476 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 1,442
### Context
After watching [Somm](http://www.imdb.com/title/tt2204371/) (a documentary on master sommeliers) I wondered how I could create a predictive model to identify wines through blind tasting like a master sommelier would. The first step in this journey was gathering some data to train a model. I plan to use deep learning to predict the wine variety using words in the description/review. The model still won't be able to taste the wine, but theoretically it could identify the wine based on a description that a sommelier could give. If anyone has any ideas on how to accomplish this, please post them!
### Content
This dataset contains three files:
- **winemag-data-130k-v2.csv** contains 10 columns and 130k rows of wine reviews.
- **winemag-data_first150k.csv** contains 10 columns and 150k rows of wine reviews.
- **winemag-data-130k-v2.json** contains 6919 nodes of wine reviews.
Click on the data tab to see individual file descriptions, column-level metadata and summary statistics.
### Acknowledgements
The data was scraped from [WineEnthusiast](http://www.winemag.com/?s=&drink_type=wine) during the week of June 15th, 2017. The code for the scraper can be found [here](https://github.com/zackthoutt/wine-deep-learning) if you have any more specific questions about data collection that I didn't address.
**UPDATE 11/24/2017**
After feedback from users of the dataset I scraped the reviews again on November 22nd, 2017. This time around I collected the title of each review (which you can parse the year out of), the taster's name, and the taster's Twitter handle. This should also fix the duplicate entry issue.
### Inspiration
I think that this dataset offers some great opportunities for sentiment analysis and other text related predictive models. My overall goal is to create a model that can identify the variety, winery, and location of a wine based on a description. If anyone has any ideas, breakthroughs, or other interesting insights/models please post them.
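As one possible starting point toward that goal, here is a minimal text-classification sketch; the column names `description` and `variety` are assumptions about the CSV layout.

```python
# Minimal sketch: predict the wine variety from the review text.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("winemag-data-130k-v2.csv").dropna(subset=["description", "variety"])
X_train, X_test, y_train, y_test = train_test_split(
    df["description"], df["variety"], random_state=0)

model = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```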
Title: Wine Reviews | Subtitle: 130k wine reviews with variety, location, winery, price, and description | LicenseName: CC BY-NC-SA 4.0 | VersionNumber: 4 | VersionChangesCount: 4 | AllTags: null | AllTagsCount: 0 | NormViews: 0.119079 | NormVotes: 0.0691 | NormDownloads: 0.338671 | NormKernels: 0.851545 | CombinedScore: 0.344599 | LogCombinedScore: 0.296096 | Rank_CombinedScore: 5

Row 15,411 | Id: 17,810 | CreatorUserId: 1,314,380 | OwnerUserId: 1,314,380 | OwnerOrganizationId: null | CurrentDatasetVersionId: 23,812 | CurrentDatasourceVersionId: 23,851 | ForumId: 25,540 | Type: Dataset | CreationDate: 03/22/2018 05:42:41 | LastActivityDate: 03/22/2018 | TotalViews: 2,863,109 | TotalDownloads: 547,127 | TotalVotes: 7,159 | TotalKernels: 3,303 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 17,810
### Context
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5

Figure S6. Illustrative Examples of Chest X-Rays in Patients with Pneumonia, Related to Figure 6
The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse "interstitial" pattern in both lungs.
http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5
### Content
The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children's Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients' routine clinical care.
For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert.
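A minimal loading sketch for the folder layout described above is shown below, using TensorFlow/Keras; the root folder name `chest_xray` and the image size are assumptions.

```python
# Sketch: build labelled image datasets from the train/val folders (Pneumonia/Normal subfolders).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=(180, 180), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/val", image_size=(180, 180), batch_size=32)

print(train_ds.class_names)  # class labels inferred from the subfolder names
```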
### Acknowledgements
Data: https://data.mendeley.com/datasets/rscbjbr9sj/2
License: [CC BY 4.0][1]
Citation: http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5
![enter image description here][2]
### Inspiration
Automated methods to detect and classify human diseases from medical images.
[1]: https://creativecommons.org/licenses/by/4.0/
[2]: https://i.imgur.com/8AUJkin.png
Title: Chest X-Ray Images (Pneumonia) | Subtitle: 5,863 images, 2 categories | LicenseName: Other (specified in description) | VersionNumber: 2 | VersionChangesCount: 2 | AllTags: null | AllTagsCount: 0 | NormViews: 0.236831 | NormVotes: 0.130732 | NormDownloads: 0.549805 | NormKernels: 0.43432 | CombinedScore: 0.337922 | LogCombinedScore: 0.291118 | Rank_CombinedScore: 6

Row 5,111 | Id: 284 | CreatorUserId: 462,330 | OwnerUserId: 462,330 | OwnerOrganizationId: null | CurrentDatasetVersionId: 618 | CurrentDatasourceVersionId: 618 | ForumId: 1,788 | Type: Dataset | CreationDate: 10/26/2016 08:17:30 | LastActivityDate: 02/06/2018 | TotalViews: 2,563,080 | TotalDownloads: 726,341 | TotalVotes: 6,549 | TotalKernels: 1,848 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 284
This dataset contains a list of video games with sales greater than 100,000 copies. It was generated by a scrape of [vgchartz.com][1].
Fields include
* Rank - Ranking of overall sales
* Name - The game's name
* Platform - Platform of the game's release (e.g. PC, PS4, etc.)
* Year - Year of the game's release
* Genre - Genre of the game
* Publisher - Publisher of the game
* NA_Sales - Sales in North America (in millions)
* EU_Sales - Sales in Europe (in millions)
* JP_Sales - Sales in Japan (in millions)
* Other_Sales - Sales in the rest of the world (in millions)
* Global_Sales - Total worldwide sales.
The script to scrape the data is available at https://github.com/GregorUT/vgchartzScrape.
It is based on BeautifulSoup using Python.
There are 16,598 records. 2 records were dropped due to incomplete information.
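A quick aggregation sketch with pandas is shown below; the file name `vgsales.csv` is an assumption.

```python
# Sketch: summarise global sales (in millions of copies) using the fields listed above.
import pandas as pd

games = pd.read_csv("vgsales.csv")

# Total global sales by genre
print(games.groupby("Genre")["Global_Sales"].sum().sort_values(ascending=False))

# Top five best-selling titles
print(games.nlargest(5, "Global_Sales")[["Name", "Platform", "Year", "Global_Sales"]])
```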
[1]: http://www.vgchartz.com/
Title: Video Game Sales | Subtitle: Analyze sales data from more than 16,500 games. | LicenseName: Unknown | VersionNumber: 2 | VersionChangesCount: 2 | AllTags: null | AllTagsCount: 0 | NormViews: 0.212013 | NormVotes: 0.119592 | NormDownloads: 0.729896 | NormKernels: 0.242998 | CombinedScore: 0.326125 | LogCombinedScore: 0.282261 | Rank_CombinedScore: 7

Row 7,521 | Id: 180 | CreatorUserId: 711,301 | OwnerUserId: null | OwnerOrganizationId: 7 | CurrentDatasetVersionId: 408 | CurrentDatasourceVersionId: 408 | ForumId: 1,547 | Type: Dataset | CreationDate: 09/19/2016 20:27:05 | LastActivityDate: 02/06/2018 | TotalViews: 2,398,741 | TotalDownloads: 484,259 | TotalVotes: 3,892 | TotalKernels: 3,501 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 180
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.
The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
The mean, standard error and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
All feature values are recorded with four significant digits.
Missing attribute values: none
Class distribution: 357 benign, 212 malignant
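A minimal classification sketch using the fields above is shown below; the file name `data.csv` and the lower-case `id`/`diagnosis` column names are assumptions.

```python
# Sketch: predict the diagnosis (M = malignant, B = benign) from the 30 numeric features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("data.csv")
y = df["diagnosis"]
X = (df.drop(columns=["id", "diagnosis"], errors="ignore")
       .select_dtypes("number")
       .dropna(axis=1, how="all"))   # drop any empty trailing column

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```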
Title: Breast Cancer Wisconsin (Diagnostic) Data Set | Subtitle: Predict whether the cancer is benign or malignant | LicenseName: CC BY-NC-SA 4.0 | VersionNumber: 2 | VersionChangesCount: 2 | AllTags: null | AllTagsCount: 0 | NormViews: 0.198419 | NormVotes: 0.071072 | NormDownloads: 0.486629 | NormKernels: 0.460355 | CombinedScore: 0.304119 | LogCombinedScore: 0.265528 | Rank_CombinedScore: 8

Row 14,353 | Id: 4,549 | CreatorUserId: 1,236,717 | OwnerUserId: 1,236,717 | OwnerOrganizationId: null | CurrentDatasetVersionId: 466,349 | CurrentDatasourceVersionId: 482,208 | ForumId: 10,301 | Type: Dataset | CreationDate: 11/13/2017 18:30:07 | LastActivityDate: 02/06/2018 | TotalViews: 2,095,869 | TotalDownloads: 285,151 | TotalVotes: 5,778 | TotalKernels: 4,819 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 4,549
UPDATE: Source code used for collecting this data [released here](https://github.com/DataSnaek/Trending-YouTube-Scraper)
### Context
YouTube (the world-famous video sharing website) maintains a list of the [top trending videos](https://www.youtube.com/feed/trending) on the platform. [According to Variety magazine](http://variety.com/2017/digital/news/youtube-2017-top-trending-videos-music-videos-1202631416/), "To determine the year's top-trending videos, YouTube uses a combination of factors including measuring users interactions (number of views, shares, comments and likes). Note that they're not the most-viewed videos overall for the calendar year". Top performers on the YouTube trending list are music videos (such as the famously viral "Gangnam Style"), celebrity and/or reality TV performances, and the random dude-with-a-camera viral videos that YouTube is well-known for.
This dataset is a daily record of the top trending YouTube videos.
Note that this dataset is a structurally improved version of [this dataset](https://www.kaggle.com/datasnaek/youtube).
### Content
This dataset includes several months (and counting) of data on daily trending YouTube videos. Data is included for the US, GB, DE, CA, and FR regions (USA, Great Britain, Germany, Canada, and France, respectively), with up to 200 listed trending videos per day.
EDIT: Now includes data from RU, MX, KR, JP and IN regions (Russia, Mexico, South Korea, Japan and India respectively) over the same time period.
Each regionโs data is in a separate file. Data includes the video title, channel title, publish time, tags, views, likes and dislikes, description, and comment count.
The data also includes a `category_id` field, which varies between regions. To retrieve the categories for a specific video, find it in the associated `JSON`. One such file is included for each of the five regions in the dataset.
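A minimal sketch of that lookup follows; the file names `USvideos.csv` and `US_category_id.json` and the exact JSON layout (`items` entries with `id` and `snippet.title`) are assumptions.

```python
# Sketch: attach human-readable category names using the per-region category JSON file.
import json
import pandas as pd

videos = pd.read_csv("USvideos.csv")
with open("US_category_id.json") as f:
    items = json.load(f)["items"]

id_to_name = {int(item["id"]): item["snippet"]["title"] for item in items}
videos["category"] = videos["category_id"].map(id_to_name)
print(videos[["title", "category"]].head())
```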
For more information on specific columns in the dataset refer to the [column metadata](https://www.kaggle.com/datasnaek/youtube-new/data).
### Acknowledgements
This dataset was collected using the YouTube API.
### Inspiration
Possible uses for this dataset could include:
* Sentiment analysis in a variety of forms
* Categorising YouTube videos based on their comments and statistics.
* Training ML algorithms like RNNs to generate their own YouTube comments.
* Analysing what factors affect how popular a YouTube video will be.
* Statistical analysis over time.
For further inspiration, see the kernels on this dataset!
Title: Trending YouTube Video Statistics | Subtitle: Daily statistics for trending YouTube videos | LicenseName: CC0: Public Domain | VersionNumber: 115 | VersionChangesCount: 115 | AllTags: null | AllTagsCount: 0 | NormViews: 0.173366 | NormVotes: 0.105513 | NormDownloads: 0.286547 | NormKernels: 0.633662 | CombinedScore: 0.299772 | LogCombinedScore: 0.262189 | Rank_CombinedScore: 9

Row 22,700 | Id: 661,950 | CreatorUserId: 1,772,071 | OwnerUserId: 1,772,071 | OwnerOrganizationId: null | CurrentDatasetVersionId: 2,314,697 | CurrentDatasourceVersionId: 2,356,116 | ForumId: 676,378 | Type: Dataset | CreationDate: 05/18/2020 22:50:26 | LastActivityDate: 05/18/2020 | TotalViews: 960,872 | TotalDownloads: 80,622 | TotalVotes: 54,761 | TotalKernels: 106 | Medal: 1 | MedalAwardDate: 05/24/2021 | DatasetId: 661,950
### Context
This dataset comes from this [spreadsheet](https://tinyurl.com/acnh-sheet), a comprehensive Item Catalog for Animal Crossing New Horizons (ACNH). As described by [Wikipedia](https://en.wikipedia.org/wiki/Animal_Crossing:_New_Horizons),
> ACNH is a life simulation game released by Nintendo for Nintendo Switch on March 20, 2020. It is the fifth main series title in the Animal Crossing series and, with 5 million digital copies sold, has broken the record for Switch title with most digital units sold in a single month. In New Horizons, the player assumes the role of a customizable character who moves to a deserted island. Taking place in real-time, the player can explore the island in a nonlinear fashion, gathering and crafting items, catching insects and fish, and developing the island into a community of anthropomorphic animals.
### Content
There are 30 csvs each listing various items, villagers, clothing, and other collectibles from the game. The data was collected by a dedicated group of AC fans who continue to collaborate and build this [spreadsheet](https://tinyurl.com/acnh-sheet) for public use. The database contains the original data and full list of contributors and raw data. At the time of writing, the only difference between the spreadsheet and this version is that the Kaggle version omits all columns with images of the items, but is otherwise identical.
### Acknowledgements
Thanks to every contributor listed on the [spreadsheet!](https://tinyurl.com/acnh-sheet) Please attribute this spreadsheet and group for any use of the data. They also have a Discord server linked in the spreadsheet in case you want to contact them.
Title: Animal Crossing New Horizons Catalog | Subtitle: A comprehensive inventory of ACNH items, villagers, clothing, fish/bugs etc | LicenseName: CC0: Public Domain | VersionNumber: 3 | VersionChangesCount: 3 | AllTags: null | AllTagsCount: 0 | NormViews: 0.079482 | NormVotes: 1 | NormDownloads: 0.081017 | NormKernels: 0.013938 | CombinedScore: 0.293609 | LogCombinedScore: 0.257436 | Rank_CombinedScore: 10

Row 616 | Id: 2,709 | CreatorUserId: 9,028 | OwnerUserId: 9,028 | OwnerOrganizationId: null | CurrentDatasetVersionId: 38,454 | CurrentDatasourceVersionId: 40,228 | ForumId: 7,022 | Type: Dataset | CreationDate: 09/27/2017 16:56:09 | LastActivityDate: 02/06/2018 | TotalViews: 578,396 | TotalDownloads: 193,627 | TotalVotes: 1,679 | TotalKernels: 6,845 | Medal: 1 | MedalAwardDate: 08/19/2020 | DatasetId: 2,709
### Context
Melbourne real estate is BOOMING. Can you find the insight or predict the next big trend to become a real estate mogul... or even harder, to snap up a reasonably priced 2-bedroom unit?
### Content
This is a snapshot of a [dataset created by Tony Pino][1].
It was scraped from publicly available results posted every week from Domain.com.au. He cleaned it well, and now it's up to you to make data analysis magic. The dataset includes Address, Type of Real estate, Suburb, Method of Selling, Rooms, Price, Real Estate Agent, Date of Sale and distance from C.B.D.
### Notes on Specific Variables
Rooms: Number of rooms
Price: Price in dollars
Method: S - property sold; SP - property sold prior; PI - property passed in; PN - sold prior not disclosed; SN - sold not disclosed; NB - no bid; VB - vendor bid; W - withdrawn prior to auction; SA - sold after auction; SS - sold after auction price not disclosed. N/A - price or highest bid not available.
Type: br - bedroom(s); h - house,cottage,villa, semi,terrace; u - unit, duplex; t - townhouse; dev site - development site; o res - other residential.
SellerG: Real Estate Agent
Date: Date sold
Distance: Distance from CBD
Regionname: General Region (West, North West, North, North east ...etc)
Propertycount: Number of properties that exist in the suburb.
Bedroom2 : Scraped # of Bedrooms (from different source)
Bathroom: Number of Bathrooms
Car: Number of carspots
Landsize: Land Size
BuildingArea: Building Size
CouncilArea: Governing council for the area
### Acknowledgements
This is intended as a static (unchanging) snapshot of https://www.kaggle.com/anthonypino/melbourne-housing-market. It was created in September 2017. Additionally, homes with no Price have been removed.
[1]: https://www.kaggle.com/anthonypino/melbourne-housing-market
Title: Melbourne Housing Snapshot | Subtitle: Snapshot of Tony Pino's Melbourne Housing Dataset | LicenseName: CC BY-NC-SA 4.0 | VersionNumber: 5 | VersionChangesCount: 5 | AllTags: null | AllTagsCount: 0 | NormViews: 0.047844 | NormVotes: 0.030661 | NormDownloads: 0.194575 | NormKernels: 0.900066 | CombinedScore: 0.293286 | LogCombinedScore: 0.257186 | Rank_CombinedScore: 11

Row 36,092 | Id: 551,982 | CreatorUserId: 2,931,338 | OwnerUserId: null | OwnerOrganizationId: 3,737 | CurrentDatasetVersionId: 3,756,201 | CurrentDatasourceVersionId: 3,810,704 | ForumId: 565,591 | Type: Dataset | CreationDate: 03/12/2020 20:05:08 | LastActivityDate: 03/12/2020 | TotalViews: 4,757,694 | TotalDownloads: 194,642 | TotalVotes: 11,219 | TotalKernels: 1,765 | Medal: 1 | MedalAwardDate: 03/16/2020 | DatasetId: 551,982
### Dataset Description
In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 1,000,000 scholarly articles, including over 400,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease. There is a growing urgency for these approaches because of the rapid acceleration in new coronavirus literature, making it difficult for the medical research community to keep up.
### Call to Action
We are issuing a call to action to the world's artificial intelligence experts to develop text and data mining tools that can help the medical community develop answers to high priority scientific questions. The CORD-19 dataset represents the most extensive machine-readable coronavirus literature collection available for data mining to date. This allows the worldwide AI research community the opportunity to apply text and data mining approaches to find answers to questions within, and connect insights across, this content in support of the ongoing COVID-19 response efforts worldwide. There is a growing urgency for these approaches because of the rapid increase in coronavirus literature, making it difficult for the medical community to keep up.
A list of our initial key questions can be found under the **[Tasks](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks)** section of this dataset. These key scientific questions are drawn from the NASEM's SCIED (National Academies of Sciences, Engineering, and Medicine's Standing Committee on Emerging Infectious Diseases and 21st Century Health Threats) [research topics](https://www.nationalacademies.org/event/03-11-2020/standing-committee-on-emerging-infectious-diseases-and-21st-century-health-threats-virtual-meeting-1) and the World Health Organization's [R&D Blueprint](https://www.who.int/blueprint/priority-diseases/key-action/Global_Research_Forum_FINAL_VERSION_for_web_14_feb_2020.pdf?ua=1) for COVID-19.
Many of these questions are suitable for text mining, and we encourage researchers to develop text mining tools to provide insights on these questions.
We are maintaining a summary of the [community's contributions](https://www.kaggle.com/covid-19-contributions). For guidance on how to make your contributions useful, we're maintaining a [forum thread](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/138484) with the feedback we're getting from the medical and health policy communities.
### Prizes
Kaggle is sponsoring a *$1,000 per task* award to the winner whose submission is identified as best meeting the evaluation criteria. The winner may elect to receive this award as a charitable donation to COVID-19 relief/research efforts or as a monetary payment. More details on the prizes and timeline can be found on the [discussion post](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/135826).
### Accessing the Dataset
We have made this dataset available on Kaggle. Watch out for [periodic updates](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/discussion/137474).
The dataset is also hosted on [AI2's Semantic Scholar](https://pages.semanticscholar.org/coronavirus-research). And you can search the dataset using AI2's new [COVID-19 explorer](https://cord-19.apps.allenai.org/).
The licenses for each dataset can be found in the `all_sources_metadata` csv file.
### Acknowledgements

This dataset was created by the Allen Institute for AI in partnership with the Chan Zuckerberg Initiative, Georgetown University's Center for Security and Emerging Technology, Microsoft Research, IBM, and the National Library of Medicine - National Institutes of Health, in coordination with The White House Office of Science and Technology Policy.
Title: COVID-19 Open Research Dataset Challenge (CORD-19) | Subtitle: An AI challenge with AI2, CZI, MSR, Georgetown, NIH & The White House | LicenseName: Other (specified in description) | VersionNumber: 111 | VersionChangesCount: 104 | AllTags: null | AllTagsCount: 0 | NormViews: 0.393548 | NormVotes: 0.204872 | NormDownloads: 0.195595 | NormKernels: 0.232084 | CombinedScore: 0.256525 | LogCombinedScore: 0.22835 | Rank_CombinedScore: 12

Row 2,063 | Id: 494,724 | CreatorUserId: 71,388 | OwnerUserId: 71,388 | OwnerOrganizationId: null | CurrentDatasetVersionId: 2,364,896 | CurrentDatasourceVersionId: 2,406,681 | ForumId: 507,816 | Type: Dataset | CreationDate: 01/30/2020 14:18:33 | LastActivityDate: 01/30/2020 | TotalViews: 2,553,840 | TotalDownloads: 468,447 | TotalVotes: 6,282 | TotalKernels: 1,742 | Medal: 1 | MedalAwardDate: 02/02/2020 | DatasetId: 494,724
### Context
From [World Health Organization](https://www.who.int/emergencies/diseases/novel-coronavirus-2019) - On 31 December 2019, WHO was alerted to several cases of pneumonia in Wuhan City, Hubei Province of China. The virus did not match any other known virus. This raised concern because when a virus is new, we do not know how it affects people.
So daily level information on the affected people can give some interesting insights when it is made available to the broader data science community.
[Johns Hopkins University has made an excellent dashboard](https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6) using the affected cases data. Data is extracted from the google sheets associated and made available here.
Edited:
Now data is available as csv files in the [Johns Hopkins Github repository](https://github.com/CSSEGISandData/COVID-19). Please refer to the github repository for the [Terms of Use](https://github.com/CSSEGISandData/COVID-19/blob/master/README.md) details. Uploading it here for using it in Kaggle kernels and getting insights from the broader DS community.
### Content
2019 Novel Coronavirus (2019-nCoV) is a virus (more specifically, a coronavirus) identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China. Early on, many of the patients in the outbreak in Wuhan, China reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly have not had exposure to animal markets, indicating person-to-person spread is occurring. At this time, it's unclear how easily or sustainably this virus is spreading between people - [CDC](https://www.cdc.gov/coronavirus/2019-ncov/about/index.html)
This dataset has daily level information on the number of affected cases, deaths and recovery from the 2019 novel coronavirus. Please note that this is time series data, so the number of cases on any given day is the cumulative number (see the differencing sketch after the column list below).
The data is available from 22 Jan, 2020.
### Column Description
Main file in this dataset is `covid_19_data.csv` and the detailed descriptions are below.
`covid_19_data.csv`
* Sno - Serial number
* ObservationDate - Date of the observation in MM/DD/YYYY
* Province/State - Province or state of the observation (Could be empty when missing)
* Country/Region - Country of observation
* Last Update - Time in UTC at which the row is updated for the given province or country. (Not standardised and so please clean before using it)
* Confirmed - Cumulative number of confirmed cases till that date
* Deaths - Cumulative number of deaths till that date
* Recovered - Cumulative number of recovered cases till that date
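Because those three columns are cumulative, daily new cases can be recovered by differencing within each country; a minimal sketch is below (the country-level aggregation and the example country label are assumptions).

```python
# Sketch: derive daily new confirmed cases from the cumulative 'Confirmed' column.
import pandas as pd

df = pd.read_csv("covid_19_data.csv", parse_dates=["ObservationDate"])
by_country = (df.groupby(["Country/Region", "ObservationDate"])["Confirmed"]
                .sum()          # sum over provinces/states within each country and day
                .sort_index())

daily_new = by_country.groupby(level="Country/Region").diff().fillna(by_country)
print(daily_new.loc["Mainland China"].tail())   # country label as assumed to appear in the file
```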
`2019_ncov_data.csv`
This is an older file and is no longer updated. Please use the `covid_19_data.csv` file.
**Added two new files with individual level information**
`COVID_open_line_list_data.csv`
This file is obtained from [this link](https://docs.google.com/spreadsheets/d/1itaohdPiAeniCXNlntNztZ_oRvjh0HsGuJXUJWET008/edit#gid=0)
`COVID19_line_list_data.csv`
This file is obtained from [this link](https://docs.google.com/spreadsheets/d/e/2PACX-1vQU0SIALScXx8VXDX7yKNKWWPKE1YjFlWc6VTEVSN45CklWWf-uWmprQIyLtoPDA18tX9cFDr-aQ9S6/pubhtml)
**Country level datasets**
If you are interested in knowing country level data, please refer to the following Kaggle datasets:
**India** - https://www.kaggle.com/sudalairajkumar/covid19-in-india
**South Korea** - https://www.kaggle.com/kimjihoo/coronavirusdataset
**Italy** - https://www.kaggle.com/sudalairajkumar/covid19-in-italy
**Brazil** - https://www.kaggle.com/unanimad/corona-virus-brazil
**USA** - https://www.kaggle.com/sudalairajkumar/covid19-in-usa
**Switzerland** - https://www.kaggle.com/daenuprobst/covid19-cases-switzerland
**Indonesia** - https://www.kaggle.com/ardisragen/indonesia-coronavirus-cases
### Acknowledgements
* [Johns Hopkins University](https://github.com/CSSEGISandData/COVID-19) for making the data available for educational and academic research purposes
* MoBS lab - https://www.mobs-lab.org/2019ncov.html
* World Health Organization (WHO): https://www.who.int/
* DXY.cn. Pneumonia. 2020. http://3g.dxy.cn/newh5/view/pneumonia.
* BNO News: https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/
* National Health Commission of the People's Republic of China (NHC):
http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml
* China CDC (CCDC): http://weekly.chinacdc.cn/news/TrackingtheEpidemic.htm
* Hong Kong Department of Health: https://www.chp.gov.hk/en/features/102465.html
* Macau Government: https://www.ssm.gov.mo/portal/
* Taiwan CDC: https://sites.google.com/cdc.gov.tw/2019ncov/taiwan?authuser=0
* US CDC: https://www.cdc.gov/coronavirus/2019-ncov/index.html
* Government of Canada: https://www.canada.ca/en/public-health/services/diseases/coronavirus.html
* Australia Government Department of Health: https://www.health.gov.au/news/coronavirus-update-at-a-glance
* European Centre for Disease Prevention and Control (ECDC): https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases
* Ministry of Health Singapore (MOH): https://www.moh.gov.sg/covid-19
* Italy Ministry of Health: http://www.salute.gov.it/nuovocoronavirus
Picture courtesy : [Johns Hopkins University dashboard](https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6)
### Inspiration
Some insights could be
1. Changes in number of affected cases over time
2. Change in cases over time at country level
3. Latest number of affected cases
Title: Novel Corona Virus 2019 Dataset | Subtitle: Day level information on covid-19 affected cases | LicenseName: Data files © Original Authors | VersionNumber: 151 | VersionChangesCount: 151 | AllTags: null | AllTagsCount: 0 | NormViews: 0.211249 | NormVotes: 0.114717 | NormDownloads: 0.47074 | NormKernels: 0.22906 | CombinedScore: 0.256441 | LogCombinedScore: 0.228283 | Rank_CombinedScore: 13

Row 3,049 | Id: 138 | CreatorUserId: 1,132,983 | OwnerUserId: null | OwnerOrganizationId: 1,030 | CurrentDatasetVersionId: 4,508 | CurrentDatasourceVersionId: 4,508 | ForumId: 1,471 | Type: Dataset | CreationDate: 08/30/2016 03:36:42 | LastActivityDate: 02/06/2018 | TotalViews: 2,086,741 | TotalDownloads: 470,707 | TotalVotes: 4,140 | TotalKernels: 2,289 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 138
### Background
What can we say about the success of a movie before it is released? Are there certain companies (Pixar?) that have found a consistent formula? Given that major films costing over $100 million to produce can still flop, this question is more important than ever to the industry. Film aficionados might have different interests. Can we predict which films will be highly rated, whether or not they are a commercial success?
This is a great place to start digging in to those questions, with data on the plot, cast, crew, budget, and revenues of several thousand films.
### Data Source Transfer Summary
We (Kaggle) have removed the original version of this dataset per a [DMCA](https://en.wikipedia.org/wiki/Digital_Millennium_Copyright_Act) takedown request from IMDB. In order to minimize the impact, we're replacing it with a similar set of films and data fields from [The Movie Database (TMDb)](themoviedb.org) in accordance with [their terms of use](https://www.themoviedb.org/documentation/api/terms-of-use). The bad news is that kernels built on the old dataset will most likely no longer work.
The good news is that:
- You can port your existing kernels over with a bit of editing. [This kernel](https://www.kaggle.com/sohier/getting-imdb-kernels-working-with-tmdb-data/) offers functions and examples for doing so. You can also find [a general introduction to the new format here](https://www.kaggle.com/sohier/tmdb-format-introduction).
- The new dataset contains full credits for both the cast and the crew, rather than just the first three actors.
- Actors and actresses are now listed in the order they appear in the credits. It's unclear what ordering the original dataset used; for the movies I spot checked it didn't line up with either the credits order or IMDB's stars order.
- The revenues appear to be more current. For example, IMDB's figures for Avatar seem to be from 2010 and understate the film's global revenues by over $2 billion.
- Some of the movies that we weren't able to port over (a couple of hundred) were just bad entries. For example, [this IMDB entry](http://www.imdb.com/title/tt5289954/?ref_=fn_t...) has basically no accurate information at all. It lists Star Wars Episode VII as a documentary.
### Data Source Transfer Details
- Several of the new columns contain JSON. You can save a bit of time by porting the load data functions [from this kernel]() (see the parsing sketch after this list).
- Even simple fields like runtime may not be consistent across versions. For example, the previous dataset shows the duration for Avatar's extended cut while TMDB shows the time for the original version.
- There's now a separate file containing the full credits for both the cast and crew.
- All fields are filled out by users so don't expect them to agree on keywords, genres, ratings, or the like.
- Your existing kernels will continue to render normally until they are re-run.
- If you are curious about how this dataset was prepared, the code to access TMDb's API is posted [here](https://gist.github.com/SohierDane/4a84cb96d220fc4791f52562be37968b).
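A minimal parsing sketch for those JSON-encoded columns follows; the file name `tmdb_5000_movies.csv` and the `genres`/`title` column names are assumptions.

```python
# Sketch: decode a stringified-JSON column (each cell holds a list of objects like {"id": 18, "name": "Drama"}).
import json
import pandas as pd

movies = pd.read_csv("tmdb_5000_movies.csv")
movies["genre_names"] = movies["genres"].apply(
    lambda cell: [g["name"] for g in json.loads(cell)] if isinstance(cell, str) else []
)
print(movies[["title", "genre_names"]].head())
```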
New columns:
- homepage
- id
- original_title
- overview
- popularity
- production_companies
- production_countries
- release_date
- spoken_languages
- status
- tagline
- vote_average
Lost columns:
- actor_1_facebook_likes
- actor_2_facebook_likes
- actor_3_facebook_likes
- aspect_ratio
- cast_total_facebook_likes
- color
- content_rating
- director_facebook_likes
- facenumber_in_poster
- movie_facebook_likes
- movie_imdb_link
- num_critic_for_reviews
- num_user_for_reviews
### Open Questions About the Data
There are some things we haven't had a chance to confirm about the new dataset. If you have any insights, please let us know in the forums!
- Are the budgets and revenues all in US dollars? Do they consistently show the global revenues?
- This dataset hasn't yet gone through a data quality analysis. Can you find any obvious corrections? For example, in the IMDb version it was necessary to treat values of zero in the budget field as missing. Similar findings would be very helpful to your fellow Kagglers! (It's probably a good idea to keep treating zeros as missing, with the caveat that missing budgets are much more likely to have come from small-budget films in the first place.)
### Inspiration
- Can you categorize the films by type, such as animated or not? We don't have explicit labels for this, but it should be possible to build them from the crew's job titles.
- How sharp is the divide between major film studios and the independents? Do those two groups fall naturally out of a clustering analysis or is something more complicated going on?
### Acknowledgements
This dataset was generated from [The Movie Database](themoviedb.org) API. This product uses the TMDb API but is not endorsed or certified by TMDb.
Their API also provides access to data on many additional movies, actors and actresses, crew members, and TV shows. You can [try it for yourself here](https://www.themoviedb.org/documentation/api).

Title: TMDB 5000 Movie Dataset | Subtitle: Metadata on ~5,000 movies from TMDb | LicenseName: Other (specified in description) | VersionNumber: 2 | VersionChangesCount: 1 | AllTags: null | AllTagsCount: 0 | NormViews: 0.172611 | NormVotes: 0.075601 | NormDownloads: 0.473011 | NormKernels: 0.300986 | CombinedScore: 0.255552 | LogCombinedScore: 0.227576 | Rank_CombinedScore: 14

Row 10,236 | Id: 11,167 | CreatorUserId: 907,764 | OwnerUserId: 907,764 | OwnerOrganizationId: null | CurrentDatasetVersionId: 15,520 | CurrentDatasourceVersionId: 15,520 | ForumId: 18,557 | Type: Dataset | CreationDate: 01/28/2018 08:44:24 | LastActivityDate: 02/06/2018 | TotalViews: 1,128,892 | TotalDownloads: 236,099 | TotalVotes: 2,404 | TotalKernels: 4,747 | Medal: 1 | MedalAwardDate: 09/09/2020 | DatasetId: 11,167
### Context
Bob has started his own mobile company. He wants to give a tough fight to big companies like Apple, Samsung, etc.
He does not know how to estimate the price of the mobiles his company creates. In this competitive mobile phone market you cannot simply assume things. To solve this problem he collects sales data of mobile phones of various companies.
Bob wants to find out some relation between the features of a mobile phone (e.g. RAM, internal memory, etc.) and its selling price. But he is not so good at Machine Learning. So he needs your help to solve this problem.
In this problem you do not have to predict the actual price but a price range indicating how high the price is.
Title: Mobile Price Classification | Subtitle: Classify Mobile Price Range | LicenseName: Unknown | VersionNumber: 1 | VersionChangesCount: 1 | AllTags: null | AllTagsCount: 0 | NormViews: 0.09338 | NormVotes: 0.0439 | NormDownloads: 0.237255 | NormKernels: 0.624195 | CombinedScore: 0.249682 | LogCombinedScore: 0.222889 | Rank_CombinedScore: 15

Row 19,332 | Id: 13,996 | CreatorUserId: 1,574,575 | OwnerUserId: 1,574,575 | OwnerOrganizationId: null | CurrentDatasetVersionId: 18,858 | CurrentDatasourceVersionId: 18,858 | ForumId: 21,551 | Type: Dataset | CreationDate: 02/23/2018 18:20:00 | LastActivityDate: 02/23/2018 | TotalViews: 2,529,709 | TotalDownloads: 426,389 | TotalVotes: 3,217 | TotalKernels: 2,056 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 13,996
### Context
"Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs." [IBM Sample Data Sets]
### Content
Each row represents a customer; each column contains the customer's attributes described in the column Metadata.
**The data set includes information about:**
+ Customers who left within the last month – the column is called Churn
+ Services that each customer has signed up for – phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
+ Customer account information – how long they've been a customer, contract, payment method, paperless billing, monthly charges, and total charges
+ Demographic info about customers – gender, age range, and if they have partners and dependents
### Inspiration
To explore this type of models and learn more about the subject.
**New version from IBM:**
https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113
Title: Telco Customer Churn | Subtitle: Focused customer retention programs | LicenseName: Data files © Original Authors | VersionNumber: 1 | VersionChangesCount: 1 | AllTags: null | AllTagsCount: 0 | NormViews: 0.209253 | NormVotes: 0.058746 | NormDownloads: 0.428476 | NormKernels: 0.270348 | CombinedScore: 0.241706 | LogCombinedScore: 0.216486 | Rank_CombinedScore: 16

Row 64,098 | Id: 1,041,311 | CreatorUserId: 761,104 | OwnerUserId: 761,104 | OwnerOrganizationId: null | CurrentDatasetVersionId: 7,746,251 | CurrentDatasourceVersionId: 7,846,685 | ForumId: 1,058,264 | Type: Dataset | CreationDate: 12/16/2020 15:20:03 | LastActivityDate: 12/16/2020 | TotalViews: 910,143 | TotalDownloads: 183,855 | TotalVotes: 2,567 | TotalKernels: 5,005 | Medal: 1 | MedalAwardDate: 08/25/2021 | DatasetId: 1,041,311
## Content
This dataset was generated by respondents to a distributed survey via Amazon Mechanical Turk between 03.12.2016 and 05.12.2016. Thirty eligible Fitbit users consented to the submission of personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. Individual reports can be parsed by export session ID (column A) or timestamp (column B). Variation between output represents use of different types of Fitbit trackers and individual tracking behaviors / preferences.
#

#
### Starter Kernel(s)
- Julen Aranguren: https://www.kaggle.com/julenaranguren/bellabeat-case-study
- Anastasiia Chebotina: https://www.kaggle.com/chebotinaa/bellabeat-case-study-with-r
#
### Inspiration
- Human temporal routine behavioral analysis and pattern recognition
### Acknowledgements
Furberg, Robert; Brinton, Julia; Keating, Michael ; Ortiz, Alexa
https://zenodo.org/record/53894#.YMoUpnVKiP9
### Study
- [Bellabeat Case Study](https://medium.com/@somchukwumankwocha/bellabeat-case-study-c18835475563)
- [Machine Learning for Fatigue Detection using Fitbit Fitness Trackers](https://sintef.brage.unit.no/sintef-xmlui/bitstream/handle/11250/3055538/Machine_learning_for_fatigue_detection%2B%25288%2529.pdf?sequence=1&isAllowed=y)
- [How I analyzed the data from my FitBit to improve my overall health](https://www.freecodecamp.org/news/how-i-analyzed-the-data-from-my-fitbit-to-improve-my-overall-health-a2e36426d8f9/)
- [Fitbit Fitness Tracker Data Analysis](https://fromdatatostory.com/portfolio/fitbitanalysis/)
- [Evaluating my fitness by analyzing my Fitbit data archive](https://towardsdatascience.com/evaluating-my-fitness-by-analyzing-my-fitbit-data-archive-23a123baf349)
Title: FitBit Fitness Tracker Data | Subtitle: Pattern recognition with tracker data: : Improve Your Overall Health | LicenseName: CC0: Public Domain | VersionNumber: 2 | VersionChangesCount: 2 | AllTags: null | AllTagsCount: 0 | NormViews: 0.075285 | NormVotes: 0.046876 | NormDownloads: 0.184755 | NormKernels: 0.65812 | CombinedScore: 0.241259 | LogCombinedScore: 0.216126 | Rank_CombinedScore: 17

Row 27,138 | Id: 74,977 | CreatorUserId: 2,094,163 | OwnerUserId: 2,094,163 | OwnerOrganizationId: null | CurrentDatasetVersionId: 169,835 | CurrentDatasourceVersionId: 180,443 | ForumId: 84,238 | Type: Dataset | CreationDate: 11/09/2018 18:25:25 | LastActivityDate: 11/09/2018 | TotalViews: 2,091,500 | TotalDownloads: 405,140 | TotalVotes: 4,804 | TotalKernels: 1,505 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 74,977
### Context
Marks secured by the students
### Content
This data set consists of the marks secured by the students in various subjects.
### Acknowledgements
http://roycekimmons.com/tools/generated_data/exams
### Inspiration
To understand the influence of the parents' background, test preparation, etc. on students' performance
Title: Students Performance in Exams | Subtitle: Marks secured by the students in various subjects | LicenseName: Unknown | VersionNumber: 1 | VersionChangesCount: 1 | AllTags: null | AllTagsCount: 0 | NormViews: 0.173005 | NormVotes: 0.087727 | NormDownloads: 0.407123 | NormKernels: 0.197896 | CombinedScore: 0.216438 | LogCombinedScore: 0.195927 | Rank_CombinedScore: 18

Row 5,451 | Id: 2,243 | CreatorUserId: 484,516 | OwnerUserId: null | OwnerOrganizationId: 952 | CurrentDatasetVersionId: 9,243 | CurrentDatasourceVersionId: 9,243 | ForumId: 6,101 | Type: Dataset | CreationDate: 08/28/2017 20:58:16 | LastActivityDate: 02/06/2018 | TotalViews: 1,517,576 | TotalDownloads: 256,517 | TotalVotes: 3,046 | TotalKernels: 2,945 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 2,243
### Context
Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."
Zalando seeks to replace the original MNIST dataset
### Content
Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. The training and test data sets have 785 columns. The first column consists of the class labels (see above), and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image.
- To locate a pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27. The pixel is located on row i and column j of a 28 x 28 matrix.
- For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top, as in the ascii-diagram below.
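A minimal sketch of that decomposition, assuming the CSV layout described above and the file name `fashion-mnist_train.csv`:

```python
# Sketch: reshape one CSV row (label + 784 pixel values) into a 28x28 image.
import numpy as np
import pandas as pd

train = pd.read_csv("fashion-mnist_train.csv")
label = train.iloc[0, 0]                              # first column is the class label
pixels = train.iloc[0, 1:].to_numpy(dtype=np.uint8)   # remaining 784 columns are pixel values

image = pixels.reshape(28, 28)   # image[i, j] with x = i * 28 + j, i and j in 0..27
print("label:", label, "value at row 1, column 3:", image[1, 3])
```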
<br><br>
**Labels**
Each training and test example is assigned to one of the following labels:
- 0 T-shirt/top
- 1 Trouser
- 2 Pullover
- 3 Dress
- 4 Coat
- 5 Sandal
- 6 Shirt
- 7 Sneaker
- 8 Bag
- 9 Ankle boot
<br><br>
TL;DR
- Each row is a separate image
- Column 1 is the class label.
- Remaining columns are pixel numbers (784 total).
- Each value is the darkness of the pixel (0 to 255)
### Acknowledgements
- Original dataset was downloaded from [https://github.com/zalandoresearch/fashion-mnist][1]
- Dataset was converted to CSV with this script: [https://pjreddie.com/projects/mnist-in-csv/][2]
### License
The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
[1]: https://github.com/zalandoresearch/fashion-mnist
[2]: https://pjreddie.com/projects/mnist-in-csv/
Title: Fashion MNIST | Subtitle: An MNIST-like dataset of 70,000 28x28 labeled fashion images | LicenseName: Other (specified in description) | VersionNumber: 4 | VersionChangesCount: 4 | AllTags: null | AllTagsCount: 0 | NormViews: 0.125531 | NormVotes: 0.055624 | NormDownloads: 0.257773 | NormKernels: 0.387245 | CombinedScore: 0.206543 | LogCombinedScore: 0.187759 | Rank_CombinedScore: 19

Row 14,182 | Id: 33,180 | CreatorUserId: 1,223,413 | OwnerUserId: 1,223,413 | OwnerOrganizationId: null | CurrentDatasetVersionId: 43,520 | CurrentDatasourceVersionId: 45,794 | ForumId: 41,554 | Type: Dataset | CreationDate: 06/25/2018 11:33:56 | LastActivityDate: 06/25/2018 | TotalViews: 1,811,531 | TotalDownloads: 284,703 | TotalVotes: 5,685 | TotalKernels: 2,005 | Medal: null | MedalAwardDate: null | DatasetId: 33,180 | Description: null | Title: null | Subtitle: null | LicenseName: null | VersionNumber: null | VersionChangesCount: 0 | AllTags: null | AllTagsCount: 0 | NormViews: 0.149846 | NormVotes: 0.103815 | NormDownloads: 0.286097 | NormKernels: 0.263642 | CombinedScore: 0.20085 | LogCombinedScore: 0.18303 | Rank_CombinedScore: 20

Row 19,997 | Id: 13,720 | CreatorUserId: 1,616,098 | OwnerUserId: 1,616,098 | OwnerOrganizationId: null | CurrentDatasetVersionId: 18,513 | CurrentDatasourceVersionId: 18,513 | ForumId: 21,253 | Type: Dataset | CreationDate: 02/21/2018 00:15:14 | LastActivityDate: 02/21/2018 | TotalViews: 1,784,261 | TotalDownloads: 348,010 | TotalVotes: 2,978 | TotalKernels: 1,870 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 13,720
## Context
Machine Learning with R by Brett Lantz is a book that provides an introduction to machine learning using R. As far as I can tell, Packt Publishing does not make its datasets available online unless you buy the book and create a user account which can be a problem if you are checking the book out from the library or borrowing the book from a friend. All of these datasets are in the public domain but simply needed some cleaning up and recoding to match the format in the book.
## Content
**Columns**
- age: age of primary beneficiary
- sex: insurance contractor gender, female, male
- bmi: Body mass index, providing an understanding of body weights that are relatively high or low relative to height; an objective index of body weight (kg / m^2) based on the ratio of weight to squared height, ideally 18.5 to 24.9
- children: Number of children covered by health insurance / Number of dependents
- smoker: Smoking
- region: the beneficiary's residential area in the US, northeast, southeast, southwest, northwest.
- charges: Individual medical costs billed by health insurance
## Acknowledgements
The dataset is available on GitHub [here](https://github.com/stedy/Machine-Learning-with-R-datasets).
## Inspiration
Can you accurately predict insurance costs?
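A minimal sketch matching the subtitle's linear-regression framing is shown below; the file name `insurance.csv` is an assumption.

```python
# Sketch: forecast 'charges' with linear regression, one-hot encoding the categorical columns.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("insurance.csv")
X = pd.get_dummies(df.drop(columns=["charges"]), drop_first=True)  # encode sex, smoker, region
y = df["charges"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```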
Title: Medical Cost Personal Datasets | Subtitle: Insurance Forecast by using Linear Regression | LicenseName: Database: Open Database, Contents: Database Contents | VersionNumber: 1 | VersionChangesCount: 1 | AllTags: null | AllTagsCount: 0 | NormViews: 0.147591 | NormVotes: 0.054382 | NormDownloads: 0.349713 | NormKernels: 0.245891 | CombinedScore: 0.199394 | LogCombinedScore: 0.181817 | Rank_CombinedScore: 21

Row 13,417 | Id: 4,458 | CreatorUserId: 1,132,983 | OwnerUserId: null | OwnerOrganizationId: 7 | CurrentDatasetVersionId: 8,204 | CurrentDatasourceVersionId: 8,204 | ForumId: 10,170 | Type: Dataset | CreationDate: 11/12/2017 14:08:43 | LastActivityDate: 02/06/2018 | TotalViews: 1,650,374 | TotalDownloads: 318,880 | TotalVotes: 3,134 | TotalKernels: 2,025 | Medal: 1 | MedalAwardDate: 11/06/2019 | DatasetId: 4,458
### Context
The two datasets are related to red and white variants of the Portuguese "Vinho Verde" wine. For more details, consult the reference [Cortez et al., 2009]. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
These datasets can be viewed as classification or regression tasks. The classes are ordered and not balanced (e.g. there are many more normal wines than excellent or poor ones).
---
*This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. (If I am mistaken and the public license type disallowed me from doing so, I will take this down if requested.)*
### Content
For more information, read [Cortez et al., 2009].<br>
Input variables (based on physicochemical tests):<br>
1 - fixed acidity <br>
2 - volatile acidity <br>
3 - citric acid <br>
4 - residual sugar <br>
5 - chlorides <br>
6 - free sulfur dioxide <br>
7 - total sulfur dioxide <br>
8 - density <br>
9 - pH <br>
10 - sulphates <br>
11 - alcohol <br>
Output variable (based on sensory data): <br>
12 - quality (score between 0 and 10) <br>
### Tips
An interesting thing to do, aside from regression modelling, is to set an arbitrary cutoff for your dependent variable (wine quality), e.g. classifying scores of 7 or higher as 'good/1' and the remainder as 'not good/0'.
This allows you to practice hyperparameter tuning on e.g. decision tree algorithms, looking at the ROC curve and the AUC value.
Without doing any kind of feature engineering or overfitting you should be able to get an AUC of .88 (without even using a random forest algorithm).
**KNIME** is a great tool (GUI) that can be used for this.<br>
1 - File Reader (for csv) to Linear Correlation node and to Interactive Histogram node for basic EDA.<br>
2 - File Reader to Rule Engine node to turn the 10-point scale into a dichotomous variable (good wine vs. the rest); the code to put in the rule engine is something like this:<br>
- **$quality$ > 6.5 => "good"**<br>
- **TRUE => "bad"** <br>
3 - Rule Engine node output to Column Filter node input to filter out your original 10-point feature (this prevents leakage)<br>
4 - Column Filter node output to Partitioning node input (your standard train/test split, e.g. 75%/25%, choose 'random' or 'stratified')<br>
5 - Partitioning node train split output to Decision Tree Learner node input<br>
6 - Partitioning node test split output to Decision Tree Predictor node input<br>
7 - Decision Tree Learner node output to Decision Tree Predictor node input<br>
8 - Decision Tree Predictor output to ROC node input (here you can evaluate your model based on the AUC value)<br>
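For those working outside KNIME, a minimal Python/scikit-learn sketch of the same workflow; the file name `winequality-red.csv` is an assumption, so adjust it to your copy:

```python
# Sketch: binarise quality at 6.5, train a decision tree, evaluate with ROC AUC.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("winequality-red.csv")
df["good"] = (df["quality"] > 6.5).astype(int)    # rule-engine step
X = df.drop(columns=["quality", "good"])          # column filter: drop the 10-point score
y = df["good"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)   # partitioning step
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
proba = tree.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, proba))
```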
### Inspiration
Use machine learning to determine which physiochemical properties make a wine 'good'!
### Acknowledgements
This dataset is also available from the UCI machine learning repository, https://archive.ics.uci.edu/ml/datasets/wine+quality , I just shared it to kaggle for convenience. *(If I am mistaken and the public license type disallowed me from doing so, I will take this down at first request. I am not the owner of this dataset.)*
**Please include this citation if you plan to use this database:
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.**
### Relevant publication
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties.
In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
|
Red Wine Quality
|
Simple and clean practice dataset for regression or classification modelling
|
Database: Open Database, Contents: Database Contents
| 2
| 2
| null | 0
| 0.136516
| 0.057231
| 0.320441
| 0.266272
| 0.195115
| 0.178242
| 22
|
10,417
| 3,405
| 927,562
| 927,562
| null | 6,663
| 6,663
| 8,425
|
Dataset
|
10/24/2017 18:53:43
|
02/06/2018
| 2,049,402
| 429,937
| 3,806
| 727
| 1
|
11/06/2019
| 3,405
|
### Context
These files contain metadata for all 45,000 movies listed in the Full MovieLens Dataset. The dataset consists of movies released on or before July 2017. Data points include cast, crew, plot keywords, budget, revenue, posters, release dates, languages, production companies, countries, TMDB vote counts and vote averages.
This dataset also has files containing 26 million ratings from 270,000 users for all 45,000 movies. Ratings are on a scale of 1-5 and have been obtained from the official GroupLens website.
### Content
This dataset consists of the following files:
**movies_metadata.csv:** The main Movies Metadata file. Contains information on 45,000 movies featured in the Full MovieLens dataset. Features include posters, backdrops, budget, revenue, release dates, languages, production countries and companies.
**keywords.csv:** Contains the movie plot keywords for our MovieLens movies. Available in the form of a stringified JSON Object.
**credits.csv:** Consists of Cast and Crew Information for all our movies. Available in the form of a stringified JSON Object.
**links.csv:** The file that contains the TMDB and IMDB IDs of all the movies featured in the Full MovieLens dataset.
**links_small.csv:** Contains the TMDB and IMDB IDs of a small subset of 9,000 movies of the Full Dataset.
**ratings_small.csv:** The subset of 100,000 ratings from 700 users on 9,000 movies.
The Full MovieLens Dataset consisting of 26 million ratings and 750,000 tag applications from 270,000 users on all the 45,000 movies in this dataset can be accessed [here](https://grouplens.org/datasets/movielens/latest/)
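A minimal sketch for expanding one of the stringified JSON columns (e.g. in keywords.csv) into Python objects; the column names are assumptions, so check the Metadata tab:

```python
# Sketch: parse a stringified-JSON column with ast.literal_eval.
import ast
import pandas as pd

keywords = pd.read_csv("keywords.csv")
# Each cell looks like "[{'id': ..., 'name': ...}, ...]", so literal_eval can parse it.
keywords["keywords"] = keywords["keywords"].apply(ast.literal_eval)
keywords["keyword_names"] = keywords["keywords"].apply(
    lambda items: [item["name"] for item in items])
print(keywords.head())
```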
### Acknowledgements
This dataset is an ensemble of data collected from TMDB and GroupLens.
The Movie Details, Credits and Keywords have been collected from the TMDB Open API. This product uses the TMDb API but is not endorsed or certified by TMDb. Their API also provides access to data on many additional movies, actors and actresses, crew members, and TV shows. You can try it for yourself [here](https://www.themoviedb.org/documentation/api).
The Movie Links and Ratings have been obtained from the Official GroupLens website. The files are a part of the dataset available [here](https://grouplens.org/datasets/movielens/latest/)

### Inspiration
This dataset was assembled as part of my second Capstone Project for Springboard's [Data Science Career Track](https://www.springboard.com/workshops/data-science-career-track). I wanted to perform an extensive EDA on Movie Data to narrate the history and the story of Cinema and use this metadata in combination with MovieLens ratings to build various types of Recommender Systems.
Both my notebooks are available as kernels with this dataset: [The Story of Film](https://www.kaggle.com/rounakbanik/the-story-of-film) and [Movie Recommender Systems](https://www.kaggle.com/rounakbanik/movie-recommender-systems)
Some of the things you can do with this dataset:
- Predicting movie revenue and/or movie success based on a certain metric.
- What movies tend to get higher vote counts and vote averages on TMDB?
- Building Content Based and Collaborative Filtering Based Recommendation Engines.
|
The Movies Dataset
|
Metadata on over 45,000 movies. 26 million ratings from over 270,000 users.
|
CC0: Public Domain
| 7
| 7
| null | 0
| 0.169523
| 0.069502
| 0.432041
| 0.095595
| 0.191665
| 0.175352
| 23
|
8,751
| 894
| 495,305
| null | 485
| 813,759
| 836,098
| 2,741
|
Dataset
|
02/28/2017 15:00:38
|
02/06/2018
| 1,919,530
| 365,741
| 4,394
| 1,169
| 1
|
11/06/2019
| 894
|
### Context
The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields โ economics, psychology, survey analysis, national statistics, health, public policy and more โ describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.
### Content
The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors โ economic production, social support, life expectancy, freedom, absence of corruption, and generosity โ contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the worldโs lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.
### Inspiration
What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?
**What is Dystopia?**
Dystopia is an imaginary country that has the worldโs least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the worldโs lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as โDystopia,โ in contrast to Utopia.
**What are the residuals?**
The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.
**What do the columns succeeding the Happiness Score (like Family, Generosity, etc.) describe?**
The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country.
The Dystopia Residual metric actually is the Dystopia Happiness Score (1.85) + the Residual value, or the unexplained value for each country, as stated in the previous answer.
If you add all these factors up, you get the happiness score, so it might be unreliable to use them as predictors of the Happiness Score.
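A quick way to sanity-check this relationship is to sum the factor columns and compare them with the reported score. A minimal sketch, assuming the 2017 CSV naming (the file and column names below are assumptions and vary slightly by report year):

```python
# Sketch: the six factors plus Dystopia Residual should roughly reconstruct the score.
import pandas as pd

df = pd.read_csv("2017.csv")
factor_cols = ["Economy..GDP.per.Capita.", "Family", "Health..Life.Expectancy.",
               "Freedom", "Generosity", "Trust..Government.Corruption.",
               "Dystopia.Residual"]
reconstructed = df[factor_cols].sum(axis=1)
print((reconstructed - df["Happiness.Score"]).abs().max())  # should be close to 0
```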
|
World Happiness Report
|
Happiness scored according to economic production, social support, etc.
|
CC0: Public Domain
| 2
| 3
| null | 0
| 0.15878
| 0.08024
| 0.367531
| 0.153715
| 0.190066
| 0.174009
| 24
|
31,538
| 134,715
| 2,483,565
| 2,483,565
| null | 320,111
| 333,307
| 144,904
|
Dataset
|
03/09/2019 06:32:21
|
03/09/2019
| 1,378,531
| 340,277
| 1,518
| 1,713
| 1
|
07/26/2020
| 134,715
|
IMDB dataset of 50K movie reviews for natural language processing or text analytics.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training and 25,000 for testing. So, predict the number of positive and negative reviews using either classification or deep learning algorithms.
For more dataset information, please go through the following link,
http://ai.stanford.edu/~amaas/data/sentiment/
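A minimal baseline sketch for the classification task described above; the file name and column names (`review`, `sentiment`) are assumptions, and the 50/50 split only mirrors the 25,000/25,000 setup rather than reproducing the official split:

```python
# Sketch: bag-of-words sentiment baseline with TF-IDF + logistic regression.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("IMDB Dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["sentiment"], test_size=0.5, random_state=0)

vec = TfidfVectorizer(min_df=5, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(vec.transform(X_test))))
```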
|
IMDB Dataset of 50K Movie Reviews
|
Large Movie Review Dataset
|
Other (specified in description)
| 1
| 1
| null | 0
| 0.11403
| 0.02772
| 0.341943
| 0.225247
| 0.177235
| 0.163168
| 25
|
6,965
| 63
| 655,525
| 655,525
| null | 589
| 589
| 1,357
|
Dataset
|
07/09/2016 13:40:34
|
02/06/2018
| 1,858,807
| 255,810
| 4,796
| 1,597
| 1
|
11/06/2019
| 63
|
The ultimate Soccer database for data analysis and machine learning
-------------------------------------------------------------------
**What you get:**
- +25,000 matches
- +10,000 players
- 11 European Countries with their lead championship
- Seasons 2008 to 2016
- Players and Teams' attributes* sourced from EA Sports' FIFA video game series, including the weekly updates
- Team line up with squad formation (X, Y coordinates)
- Betting odds from up to 10 providers
- Detailed match events (goal types, possession, corner, cross, fouls, cards etc...) for +10,000 matches
*16th Oct 2016: New table containing teams' attributes from FIFA!*
----------
**Original Data Source:**
You can easily find data about soccer matches, but it is usually scattered across different websites. A thorough data collection and processing effort has been made to make your life easier. **I must insist that you do not make any commercial use of the data**. The data was sourced from:
- [http://football-data.mx-api.enetscores.com/][1] : scores, lineup, team formation and events
- [http://www.football-data.co.uk/][2] : betting odds. [Click here to understand the column naming system for betting odds:][3]
- [http://sofifa.com/][4] : players and teams attributes from EA Sports FIFA games. *FIFA series and all FIFA assets property of EA Sports.*
> When you have a look at the database, you will notice foreign keys for
> players and matches are the same as the original data sources. I have
> called those foreign keys "api_id".
----------
**Improving the dataset:**
You will notice that some players are missing from the lineup (NULL values). This is because I have not been able to source their attributes from FIFA. This will be fixed over time as the crawling algorithm is improved.
The dataset will also be expanded to include international games, national cups, the Champions League and the Europa League. Please ask me if you're after a specific tournament.
> Please get in touch with me if you want to help improve this dataset.
[CLICK HERE TO ACCESS THE PROJECT GITHUB][5]
*Important note for people interested in using the crawlers:* since I first wrote the crawling scripts (in python), it appears sofifa.com has changed its design, and with it come new requirements for the scripts. The existing script to crawl players ('Player Spider') will not work until I've updated it.
----------
**Exploring the data:**
Now that's the fun part: there is a lot you can do with this dataset. I will be adding visuals and insights to this overview page, but please have a look at the kernels and give it a try yourself! Here are some ideas for you:
**The Holy Grail...**
... is obviously to predict the outcome of the game. The bookies use 3 classes (Home Win, Draw, Away Win). They get it right about 53% of the time. This is also what I've achieved so far using my own SVM. Though that may sound high for such an unpredictable sport, bear in mind
that the home team wins about 46% of the time. So the base case (constantly predicting Home Win) already has 46% precision.
**Probabilities vs Odds**
When running a multi-class classifier like SVM you could also output a probability estimate and compare it to the betting odds. Have a look at your variance vs odds and see for what games you had very different predictions.
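A minimal sketch of that comparison, converting decimal odds into implied probabilities; the file name and the Bet365 column names (`B365H`, `B365D`, `B365A`) are assumptions, so check them against the schema and the betting-odds notes linked above:

```python
# Sketch: turn decimal odds into implied probabilities for comparison with a model.
import sqlite3
import pandas as pd

conn = sqlite3.connect("database.sqlite")
odds = pd.read_sql("SELECT B365H, B365D, B365A FROM Match", conn)
conn.close()

inv = 1.0 / odds                              # raw inverse odds
implied = inv.div(inv.sum(axis=1), axis=0)    # normalise away the bookmaker margin
print(implied.mean())                         # average implied P(home / draw / away)
```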
**Explore and visualize features**
With access to players and teams attributes, team formations and in-game events you should be able to produce some interesting insights into [The Beautiful Game][6] . Who knows, Guardiola himself may hire one of you some day!
[1]: http://football-data.mx-api.enetscores.com/
[2]: http://www.football-data.co.uk/
[3]: http://www.football-data.co.uk/notes.txt
[4]: http://sofifa.com/
[5]: https://github.com/hugomathien/football-data-collection/tree/master/footballData
[6]: https://en.wikipedia.org/wiki/The_Beautiful_Game
|
European Soccer Database
|
25k+ matches, players & teams attributes for European Professional Football
|
Database: Open Database, Contents: ยฉ Original Authors
| 10
| 10
| null | 0
| 0.153757
| 0.087581
| 0.257062
| 0.209993
| 0.177098
| 0.163052
| 26
|
14,349
| 2,321
| 1,236,717
| 1,236,717
| null | 3,919
| 3,919
| 6,243
|
Dataset
|
09/04/2017 03:09:09
|
02/05/2018
| 456,183
| 58,647
| 1,394
| 4,433
| 1
|
09/02/2020
| 2,321
|
**General Info**
This is a set of just over 20,000 games collected from a selection of users on the site Lichess.org, along with notes on how to collect more. I will also upload more games in the future as I collect them. This set contains the:
- Game ID;
- Rated (T/F);
- Start Time;
- End Time;
- Number of Turns;
- Game Status;
- Winner;
- Time Increment;
- White Player ID;
- White Player Rating;
- Black Player ID;
- Black Player Rating;
- All Moves in Standard Chess Notation;
- Opening Eco (Standardised Code for any given opening, [list here][1]);
- Opening Name;
- Opening Ply (Number of moves in the opening phase)
For each of these separate games from Lichess. I collected this data using the [Lichess API][2], which enables collection of any given user's game history. The difficult part was collecting usernames to use; however, the API also enables dumping of all users in a Lichess team. There are several teams on Lichess with over 1,500 players, so this proved an effective way to find users to collect games from.
**Possible Uses**
Lots of information is contained within a single chess game, let alone a full dataset of multiple games. It is primarily a game of patterns, and data science is all about detecting patterns in data, which is why chess has been one of the most heavily invested-in areas of AI in the past. This dataset collects all of the information available from 20,000 games and presents it in a format that is easy to process for analysis of, for example, what allows a player to win as black or white, how much meta (out-of-game) factors affect a game, the relationship between openings and victory for black and white, and more.
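A minimal sketch relating the rating gap to the winner; the file name `games.csv` and the exact column names are assumptions:

```python
# Sketch: does the rating gap predict the winner?
import pandas as pd

games = pd.read_csv("games.csv")
games["rating_gap"] = games["white_rating"] - games["black_rating"]
games["white_won"] = (games["winner"] == "white").astype(int)

# Win rate for white, bucketed by rating advantage
buckets = pd.cut(games["rating_gap"], bins=[-1000, -200, -50, 50, 200, 1000])
print(games.groupby(buckets)["white_won"].mean())
```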
[1]: https://www.365chess.com/eco.php
[2]: https://github.com/ornicar/lila
|
Chess Game Dataset (Lichess)
|
20,000+ Lichess Games, including moves, victor, rating, opening details and more
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.037735
| 0.025456
| 0.058934
| 0.582906
| 0.176258
| 0.162338
| 27
|
27,548
| 49,864
| 2,115,707
| 2,115,707
| null | 274,957
| 287,262
| 58,489
|
Dataset
|
09/04/2018 18:19:51
|
09/04/2018
| 1,998,685
| 282,267
| 5,047
| 1,146
| 1
|
11/06/2019
| 49,864
|
# [ADVISORY] IMPORTANT #
## Instructions for citation:
If you use this dataset anywhere in your work, kindly cite as the below:
L. Gupta, "Google Play Store Apps," Feb 2019. [Online]. Available: https://www.kaggle.com/lava18/google-play-store-apps
### Context
While many public datasets (on Kaggle and the like) provide Apple App Store data, there are not many counterpart datasets available for Google Play Store apps anywhere on the web. On digging deeper, I found out that the iTunes App Store page deploys a nicely indexed, appendix-like structure that allows for simple and easy web scraping. On the other hand, the Google Play Store uses sophisticated modern-day techniques (like dynamic page load) using jQuery, making scraping more challenging.
### Content
Each app (row) has values for category, rating, size, and more.
### Acknowledgements
This information is scraped from the Google Play Store. This app information would not be available without it.
### Inspiration
The Play Store apps data has enormous potential to drive app-making businesses to success. Actionable insights can be drawn for developers to work on and capture the Android market!
|
Google Play Store Apps
|
Data of 10k Play Store apps for analysing the Android market.
|
CC BY-SA 4.0
| 6
| 6
| null | 0
| 0.165328
| 0.092164
| 0.283649
| 0.15069
| 0.172958
| 0.159528
| 28
|
8,121
| 9,366
| 753,574
| 753,574
| null | 13,206
| 13,206
| 16,662
|
Dataset
|
01/11/2018 16:04:39
|
02/05/2018
| 330,693
| 52,726
| 957
| 4,476
| 1
|
12/15/2020
| 9,366
|
### Context
The Ramen Rater is a product review website for the hardcore ramen enthusiast (or "ramenphile"), with over 2500 reviews to date. This dataset is an export of "The Big List" (of reviews), converted to a CSV format.
### Content
Each record in the dataset is a single ramen product review. Review numbers are contiguous: more recently reviewed ramen varieties have higher numbers. Brand, Variety (the product name), Country, and Style (Cup? Bowl? Tray?) are pretty self-explanatory. Stars indicate the ramen quality, as assessed by the reviewer, on a 5-point scale; this is the most important column in the dataset!
Note that this dataset does *not* include the text of the reviews themselves. For that, you should browse through https://www.theramenrater.com/ instead!
### Acknowledgements
This dataset is republished as-is from the original [BIG LIST](https://www.theramenrater.com/resources-2/the-list/) on https://www.theramenrater.com/.
### Inspiration
* What ingredients or flavors are most commonly advertised on ramen package labels?
* How do ramen ratings compare against ratings for other food products (like, say, wine)?
* How is ramen manufacturing internationally distributed?
|
Ramen Ratings
|
Over 2500 ramen ratings
|
Data files ยฉ Original Authors
| 1
| 1
| null | 0
| 0.027354
| 0.017476
| 0.052984
| 0.58856
| 0.171594
| 0.158365
| 29
|
18,807
| 55,151
| 1,549,225
| null | 1,942
| 2,669,146
| 2,713,424
| 63,908
|
Dataset
|
09/21/2018 15:23:38
|
09/21/2018
| 1,796,403
| 381,368
| 3,704
| 616
| 1
|
11/06/2019
| 55,151
|
# Brazilian E-Commerce Public Dataset by Olist
Welcome! This is a Brazilian ecommerce public dataset of orders made at [Olist Store](http://www.olist.com). The dataset has information on 100k orders from 2016 to 2018 made at multiple marketplaces in Brazil. Its features allow viewing an order from multiple dimensions: from order status, price, payment and freight performance to customer location, product attributes and, finally, reviews written by customers. We also released a geolocation dataset that relates Brazilian zip codes to lat/lng coordinates.
This is real commercial data; it has been anonymised, and references to the companies and partners in the review text have been replaced with the names of Game of Thrones great houses.
## Join it With the Marketing Funnel by Olist
We have also released a [Marketing Funnel Dataset](https://www.kaggle.com/olistbr/marketing-funnel-olist/home). You may join both datasets and see an order from Marketing perspective now!
**Instructions on joining are available on this [Kernel](https://www.kaggle.com/andresionek/joining-marketing-funnel-with-brazilian-e-commerce).**
## Context
This dataset was generously provided by Olist, the largest department store in Brazilian marketplaces. Olist connects small businesses from all over Brazil to channels without hassle and with a single contract. Those merchants are able to sell their products through the Olist Store and ship them directly to the customers using Olist logistics partners. See more on our website: [www.olist.com](https://www.olist.com)
After a customer purchases the product from Olist Store, a seller gets notified to fulfill that order. Once the customer receives the product, or the estimated delivery date is due, the customer gets a satisfaction survey by email where they can rate the purchase experience and write down some comments.
### Attention
1. An order might have multiple items.
2. Each item might be fulfilled by a distinct seller.
3. All text identifying stores and partners was replaced by the names of Game of Thrones great houses.
### Example of a product listing on a marketplace

## Data Schema
The data is divided in multiple datasets for better understanding and organization. Please refer to the following data schema when working with it:

## Classified Dataset
We had previously released a classified dataset, but we removed it at *Version 6*. We intend to release it again as a new dataset with a new data schema. Until we finish it, you may use the classified dataset available in *Version 5* or earlier.
## Inspiration
Here is some inspiration for possible outcomes from this dataset.
**NLP:** <br>
This dataset offers a rich environment for parsing the review text across its multiple dimensions.
**Clustering:**<br>
Some customers didn't write a review. But why are they happy or mad?
**Sales Prediction:**<br>
With purchase date information you'll be able to predict future sales.
**Delivery Performance:**<br>
You will also be able to work through delivery performance and find ways to optimize delivery times.
**Product Quality:** <br>
Enjoy yourself discovering the product categories that are more prone to customer dissatisfaction.
**Feature Engineering:** <br>
Create features from this rich dataset or attach some external public information to it.
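A minimal sketch joining two of the files to relate delivery time to review score; the file and column names follow the data schema above, but treat them as assumptions and check them against your download:

```python
# Sketch: join orders with reviews and compare delivery time against review score.
import pandas as pd

orders = pd.read_csv("olist_orders_dataset.csv",
                     parse_dates=["order_purchase_timestamp",
                                  "order_delivered_customer_date"])
reviews = pd.read_csv("olist_order_reviews_dataset.csv")

df = orders.merge(reviews[["order_id", "review_score"]], on="order_id", how="inner")
df["delivery_days"] = (df["order_delivered_customer_date"]
                       - df["order_purchase_timestamp"]).dt.days
print(df.groupby("review_score")["delivery_days"].median())
```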
## Acknowledgements
Thanks to Olist for releasing this dataset.
|
Brazilian E-Commerce Public Dataset by Olist
|
100,000 Orders with product, customer and reviews info
|
CC BY-NC-SA 4.0
| 2
| 8
| null | 0
| 0.148595
| 0.067639
| 0.383235
| 0.080999
| 0.170117
| 0.157104
| 30
|
15,194
| 494,766
| 1,302,389
| 1,302,389
| null | 1,402,868
| 1,435,700
| 507,860
|
Dataset
|
01/30/2020 14:46:58
|
01/30/2020
| 1,465,690
| 397,014
| 2,681
| 749
| 1
|
03/18/2020
| 494,766
|
[](https://forthebadge.com) [](https://forthebadge.com)
### Context
- A new coronavirus designated 2019-nCoV was first identified in Wuhan, the capital of China's Hubei province
- People developed pneumonia without a clear cause, for which existing vaccines or treatments were not effective.
- The virus has shown evidence of human-to-human transmission
- Transmission rate (rate of infection) appeared to escalate in mid-January 2020
- As of 30 January 2020, approximately 8,243 cases have been confirmed
### Content
> * **full_grouped.csv** - Day to day country wise no. of cases (Has County/State/Province level data)
> * **covid_19_clean_complete.csv** - Day to day country wise no. of cases (Doesn't have County/State/Province level data)
> * **country_wise_latest.csv** - Latest country level no. of cases
> * **day_wise.csv** - Day wise no. of cases (Doesn't have country level data)
> * **usa_county_wise.csv** - Day to day county level no. of cases
> * **worldometer_data.csv** - Latest data from https://www.worldometers.info/
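A minimal sketch for getting started with one of these files; the column names `Date` and `Confirmed` are assumptions, so adjust them to your copy:

```python
# Sketch: plot the global cumulative confirmed-case trend from day_wise.csv.
import pandas as pd
import matplotlib.pyplot as plt

day_wise = pd.read_csv("day_wise.csv", parse_dates=["Date"])
day_wise.plot(x="Date", y="Confirmed", title="Cumulative confirmed cases worldwide")
plt.tight_layout()
plt.show()
```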
### Acknowledgements / Data Source
> https://github.com/CSSEGISandData/COVID-19
> https://www.worldometers.info/
### Collection methodology
> https://github.com/imdevskp/covid_19_jhu_data_web_scrap_and_cleaning
### Cover Photo
> Photo from National Institutes of Allergy and Infectious Diseases
> https://www.niaid.nih.gov/news-events/novel-coronavirus-sarscov2-images
> https://blogs.cdc.gov/publichealthmatters/2019/04/h1n1/
### Similar Datasets
> * COVID-19 - https://www.kaggle.com/imdevskp/corona-virus-report
> * MERS - https://www.kaggle.com/imdevskp/mers-outbreak-dataset-20122019
> * Ebola Western Africa 2014 Outbreak - https://www.kaggle.com/imdevskp/ebola-outbreak-20142016-complete-dataset
> * H1N1 | Swine Flu 2009 Pandemic Dataset - https://www.kaggle.com/imdevskp/h1n1-swine-flu-2009-pandemic-dataset
> * SARS 2003 Pandemic - https://www.kaggle.com/imdevskp/sars-outbreak-2003-complete-dataset
> * HIV AIDS - https://www.kaggle.com/imdevskp/hiv-aids-dataset
|
COVID-19 Dataset
|
Number of Confirmed, Death and Recovered cases every day across the globe
|
Other (specified in description)
| 166
| 166
| null | 0
| 0.121239
| 0.048958
| 0.398957
| 0.098488
| 0.166911
| 0.15436
| 31
|
13,189
| 2,478
| 1,162,990
| 1,162,990
| null | 1,151,655
| 1,182,398
| 6,555
|
Dataset
|
09/13/2017 22:41:53
|
02/05/2018
| 417,677
| 40,117
| 1,380
| 4,290
| 1
|
02/24/2020
| 2,478
|
### Context:
This data publication contains a spatial database of wildfires that occurred in the United States from 1992 to 2015. It is the third update of a publication originally generated to support the national Fire Program Analysis (FPA) system. The wildfire records were acquired from the reporting systems of federal, state, and local fire organizations. The following core data elements were required for records to be included in this data publication: discovery date, final fire size, and a point location at least as precise as Public Land Survey System (PLSS) section (1-square mile grid). The data were transformed to conform, when possible, to the data standards of the National Wildfire Coordinating Group (NWCG). Basic error-checking was performed and redundant records were identified and removed, to the degree possible. The resulting product, referred to as the Fire Program Analysis fire-occurrence database (FPA FOD), includes 1.88 million geo-referenced wildfire records, representing a total of 140 million acres burned during the 24-year period.
### Content:
This dataset is an SQLite database that contains the following information:
* Fires: Table including wildfire data for the period of 1992-2015 compiled from US federal, state, and local reporting systems.
* FOD_ID = Global unique identifier.
* FPA_ID = Unique identifier that contains information necessary to track back to the original record in the source dataset.
* SOURCE_SYSTEM_TYPE = Type of source database or system that the record was drawn from (federal, nonfederal, or interagency).
* SOURCE_SYSTEM = Name of or other identifier for source database or system that the record was drawn from. See Table 1 in Short (2014), or \Supplements\FPA_FOD_source_list.pdf, for a list of sources and their identifier.
* NWCG_REPORTING_AGENCY = Active National Wildlife Coordinating Group (NWCG) Unit Identifier for the agency preparing the fire report (BIA = Bureau of Indian Affairs, BLM = Bureau of Land Management, BOR = Bureau of Reclamation, DOD = Department of Defense, DOE = Department of Energy, FS = Forest Service, FWS = Fish and Wildlife Service, IA = Interagency Organization, NPS = National Park Service, ST/C&L = State, County, or Local Organization, and TRIBE = Tribal Organization).
* NWCG_REPORTING_UNIT_ID = Active NWCG Unit Identifier for the unit preparing the fire report.
* NWCG_REPORTING_UNIT_NAME = Active NWCG Unit Name for the unit preparing the fire report.
* SOURCE_REPORTING_UNIT = Code for the agency unit preparing the fire report, based on code/name in the source dataset.
* SOURCE_REPORTING_UNIT_NAME = Name of reporting agency unit preparing the fire report, based on code/name in the source dataset.
* LOCAL_FIRE_REPORT_ID = Number or code that uniquely identifies an incident report for a particular reporting unit and a particular calendar year.
* LOCAL_INCIDENT_ID = Number or code that uniquely identifies an incident for a particular local fire management organization within a particular calendar year.
* FIRE_CODE = Code used within the interagency wildland fire community to track and compile cost information for emergency fire suppression (https://www.firecode.gov/).
* FIRE_NAME = Name of the incident, from the fire report (primary) or ICS-209 report (secondary).
* ICS_209_INCIDENT_NUMBER = Incident (event) identifier, from the ICS-209 report.
* ICS_209_NAME = Name of the incident, from the ICS-209 report.
* MTBS_ID = Incident identifier, from the MTBS perimeter dataset.
* MTBS_FIRE_NAME = Name of the incident, from the MTBS perimeter dataset.
* COMPLEX_NAME = Name of the complex under which the fire was ultimately managed, when discernible.
* FIRE_YEAR = Calendar year in which the fire was discovered or confirmed to exist.
* DISCOVERY_DATE = Date on which the fire was discovered or confirmed to exist.
* DISCOVERY_DOY = Day of year on which the fire was discovered or confirmed to exist.
* DISCOVERY_TIME = Time of day that the fire was discovered or confirmed to exist.
* STAT_CAUSE_CODE = Code for the (statistical) cause of the fire.
* STAT_CAUSE_DESCR = Description of the (statistical) cause of the fire.
* CONT_DATE = Date on which the fire was declared contained or otherwise controlled (mm/dd/yyyy where mm=month, dd=day, and yyyy=year).
* CONT_DOY = Day of year on which the fire was declared contained or otherwise controlled.
* CONT_TIME = Time of day that the fire was declared contained or otherwise controlled (hhmm where hh=hour, mm=minutes).
* FIRE_SIZE = Estimate of acres within the final perimeter of the fire.
* FIRE_SIZE_CLASS = Code for fire size based on the number of acres within the final fire perimeter expenditures (A=greater than 0 but less than or equal to 0.25 acres, B=0.26-9.9 acres, C=10.0-99.9 acres, D=100-299 acres, E=300 to 999 acres, F=1000 to 4999 acres, and G=5000+ acres).
* LATITUDE = Latitude (NAD83) for point location of the fire (decimal degrees).
* LONGITUDE = Longitude (NAD83) for point location of the fire (decimal degrees).
* OWNER_CODE = Code for primary owner or entity responsible for managing the land at the point of origin of the fire at the time of the incident.
* OWNER_DESCR = Name of primary owner or entity responsible for managing the land at the point of origin of the fire at the time of the incident.
* STATE = Two-letter alphabetic code for the state in which the fire burned (or originated), based on the nominal designation in the fire report.
* COUNTY = County, or equivalent, in which the fire burned (or originated), based on nominal designation in the fire report.
* FIPS_CODE = Three-digit code from the Federal Information Process Standards (FIPS) publication 6-4 for representation of counties and equivalent entities.
* FIPS_NAME = County name from the FIPS publication 6-4 for representation of counties and equivalent entities.
* NWCG_UnitIDActive_20170109: Look-up table containing all NWCG identifiers for agency units that were active (i.e., valid) as of 9 January 2017, when the list was downloaded from https://www.nifc.blm.gov/unit_id/Publish.html and used as the source of values available to populate the following fields in the Fires table: NWCG_REPORTING_AGENCY, NWCG_REPORTING_UNIT_ID, and NWCG_REPORTING_UNIT_NAME.
* UnitId = NWCG Unit ID.
* GeographicArea = Two-letter code for the geographic area in which the unit is located (NA=National, IN=International, AK=Alaska, CA=California, EA=Eastern Area, GB=Great Basin, NR=Northern Rockies, NW=Northwest, RM=Rocky Mountain, SA=Southern Area, and SW=Southwest).
* Gacc = Seven or eight-letter code for the Geographic Area Coordination Center in which the unit is located or primarily affiliated with (CAMBCIFC=Canadian Interagency Forest Fire Centre, USAKCC=Alaska Interagency Coordination Center, USCAONCC=Northern California Area Coordination Center, USCAOSCC=Southern California Coordination Center, USCORMCC=Rocky Mountain Area Coordination Center, USGASAC=Southern Area Coordination Center, USIDNIC=National Interagency Coordination Center, USMTNRC=Northern Rockies Coordination Center, USNMSWC=Southwest Area Coordination Center, USORNWC=Northwest Area Coordination Center, USUTGBC=Western Great Basin Coordination Center, USWIEACC=Eastern Area Coordination Center).
* WildlandRole = Role of the unit within the wildland fire community.
* UnitType = Type of unit (e.g., federal, state, local).
* Department = Department (or state/territory) to which the unit belongs (AK=Alaska, AL=Alabama, AR=Arkansas, AZ=Arizona, CA=California, CO=Colorado, CT=Connecticut, DE=Delaware, DHS=Department of Homeland Security, DOC= Department of Commerce, DOD=Department of Defense, DOE=Department of Energy, DOI= Department of Interior, DOL=Department of Labor, FL=Florida, GA=Georgia, IA=Iowa, IA/GC=Non-Departmental Agencies, ID=Idaho, IL=Illinois, IN=Indiana, KS=Kansas, KY=Kentucky, LA=Louisiana, MA=Massachusetts, MD=Maryland, ME=Maine, MI=Michigan, MN=Minnesota, MO=Missouri, MS=Mississippi, MT=Montana, NC=North Carolina, NE=Nebraska, NG=Non-Government, NH=New Hampshire, NJ=New Jersey, NM=New Mexico, NV=Nevada, NY=New York, OH=Ohio, OK=Oklahoma, OR=Oregon, PA=Pennsylvania, PR=Puerto Rico, RI=Rhode Island, SC=South Carolina, SD=South Dakota, ST/L=State or Local Government, TN=Tennessee, Tribe=Tribe, TX=Texas, USDA=Department of Agriculture, UT=Utah, VA=Virginia, VI=U. S. Virgin Islands, VT=Vermont, WA=Washington, WI=Wisconsin, WV=West Virginia, WY=Wyoming).
* Agency = Agency or bureau to which the unit belongs (AG=Air Guard, ANC=Alaska Native Corporation, BIA=Bureau of Indian Affairs, BLM=Bureau of Land Management, BOEM=Bureau of Ocean Energy Management, BOR=Bureau of Reclamation, BSEE=Bureau of Safety and Environmental Enforcement, C&L=County & Local, CDF=California Department of Forestry & Fire Protection, DC=Department of Corrections, DFE=Division of Forest Environment, DFF=Division of Forestry Fire & State Lands, DFL=Division of Forests and Land, DFR=Division of Forest Resources, DL=Department of Lands, DNR=Department of Natural Resources, DNRC=Department of Natural Resources and Conservation, DNRF=Department of Natural Resources Forest Service, DOA=Department of Agriculture, DOC=Department of Conservation, DOE=Department of Energy, DOF=Department of Forestry, DVF=Division of Forestry, DWF=Division of Wildland Fire, EPA=Environmental Protection Agency, FC=Forestry Commission, FEMA=Federal Emergency Management Agency, FFC=Bureau of Forest Fire Control, FFP=Forest Fire Protection, FFS=Forest Fire Service, FR=Forest Rangers, FS=Forest Service, FWS=Fish & Wildlife Service, HQ=Headquarters, JC=Job Corps, NBC=National Business Center, NG=National Guard, NNSA=National Nuclear Security Administration, NPS=National Park Service, NWS=National Weather Service, OES=Office of Emergency Services, PRI=Private, SF=State Forestry, SFS=State Forest Service, SP=State Parks, TNC=The Nature Conservancy, USA=United States Army, USACE=United States Army Corps of Engineers, USAF=United States Air Force, USGS=United States Geological Survey, USN=United States Navy).
* Parent = Agency subgroup to which the unit belongs (A concatenation of State and Unit from this report - https://www.nifc.blm.gov/unit_id/publish/UnitIdReport.rtf).
* Country = Country in which the unit is located (e.g. US = United States).
* State = Two-letter code for the state in which the unit is located (or primarily affiliated).
* Code = Unit code (follows state code to create UnitId).
* Name = Unit name.
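A minimal sketch for pulling a few of the documented columns out of the SQLite database with pandas; the database file name is an assumption, while the table and column names come from the list above:

```python
# Sketch: query the Fires table directly from the SQLite database.
import sqlite3
import pandas as pd

conn = sqlite3.connect("FPA_FOD_20170508.sqlite")
fires = pd.read_sql(
    "SELECT FIRE_YEAR, STAT_CAUSE_DESCR, FIRE_SIZE, STATE FROM Fires", conn)
conn.close()

print(fires.groupby("FIRE_YEAR").size())                       # fires per year
print(fires.groupby("STAT_CAUSE_DESCR")["FIRE_SIZE"].mean())   # average size by cause
```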
### Acknowledgements:
These data were collected using funding from the U.S. Government and can be used without additional permissions or fees. If you use these data in a publication, presentation, or other research product please use the following citation:
Short, Karen C. 2017. Spatial wildfire occurrence data for the United States, 1992-2015 [FPA_FOD_20170508]. 4th Edition. Fort Collins, CO: Forest Service Research Data Archive. https://doi.org/10.2737/RDS-2013-0009.4
### Inspiration:
* Have wildfires become more or less frequent over time?
* What counties are the most and least fire-prone?
* Given the size, location and date, can you predict the cause of a fire wildfire?
|
1.88 Million US Wildfires
|
24 years of geo-referenced wildfire records
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.034549
| 0.0252
| 0.040313
| 0.564103
| 0.166041
| 0.153615
| 32
|
4,397
| 483
| 395,512
| null | 7
| 982
| 982
| 2,105
|
Dataset
|
12/02/2016 19:29:17
|
02/06/2018
| 1,167,154
| 309,098
| 1,735
| 1,631
| 1
|
11/06/2019
| 483
|
## Context
The SMS Spam Collection is a set of tagged SMS messages that have been collected for SMS spam research. It contains one set of 5,574 SMS messages in English, tagged according to whether they are ham (legitimate) or spam.
## Content
The files contain one message per line. Each line is composed of two columns: v1 contains the label (ham or spam) and v2 contains the raw text.
This corpus has been collected from free or free-for-research sources on the Internet:
-> A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received. The identification of the text of spam messages in the claims is a very hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is:[ \[Web Link\]][1].
-> A subset of 3,375 randomly chosen ham messages from the NUS SMS Corpus (NSC), which is a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans and mostly from students attending the University. These messages were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is available at:[ \[Web Link\]][2].
-> A list of 450 SMS ham messages collected from Caroline Tagg's PhD Thesis available at[ \[Web Link\]][3].
-> Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages and it is publicly available at:[ \[Web Link\]][4]. This corpus has been used in academic research.
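A minimal baseline sketch using the v1/v2 layout described above; the file name `spam.csv` and the latin-1 encoding are assumptions:

```python
# Sketch: Naive Bayes spam filter on the v1 (label) / v2 (text) columns.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("spam.csv", encoding="latin-1")[["v1", "v2"]]
X_train, X_test, y_train, y_test = train_test_split(
    df["v2"], df["v1"], test_size=0.2, stratify=df["v1"], random_state=0)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```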
## Acknowledgements
The original dataset can be found [here](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection). The creators would like to note that, in case you find the dataset useful, please make a reference to the paper below and the web page http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ in your papers, research, etc.
We offer a comprehensive study of this corpus in the following paper. This work presents a number of statistics, studies and baseline results for several machine learning methods.
Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.
## Inspiration
* Can you use this dataset to build a prediction model that will accurately classify which texts are spam?
[1]: http://www.grumbletext.co.uk/
[2]: http://www.comp.nus.edu.sg/~rpnlpir/downloads/corpora/smsCorpus/
[3]: http://etheses.bham.ac.uk/253/1/Tagg09PhD.pdf
[4]: http://www.esp.uem.es/jmgomez/smsspamcorpus/
|
SMS Spam Collection Dataset
|
Collection of SMS messages tagged as spam or legitimate
|
Unknown
| 1
| 1
|
lending, banking, crowdfunding, insurance, investing, currencies and foreign exchange
| 6
| 0.096545
| 0.031683
| 0.310611
| 0.214464
| 0.163326
| 0.151283
| 33
|
4,073
| 10,128
| 360,751
| 360,751
| null | 5,438,389
| 5,512,409
| 17,476
|
Dataset
|
01/17/2018 23:57:36
|
02/04/2018
| 145,239
| 23,871
| 490
| 4,620
| 1
|
01/21/2021
| 10,128
|
#### PAID ADVERTISEMENT
Part 2 of the dataset is complete (for now!). There you'll find data specific to the Supplemental Nutrition Assistance Program (SNAP). The US SNAP program provides food benefits to low-income families to supplement their grocery budget.
Link: [US Public Food Assistance 2 - SNAP](https://www.kaggle.com/datasets/jpmiller/food-security)
Please click on the โฒ if you find it useful -- it has almost 500 downloads!
## Context
This dataset, Part 1, addresses another US program, the Special Supplemental Nutrition Program for Women, Infants, and Children Program, or simply WIC. The program allocates Federal and State funds to help low-income women and children up to age five who are at nutritional risk. Funds are used to provide supplemental foods, baby formula, health care, and nutrition education.
## Content
Files may include participation data and spending for state programs, and poverty data for each state. Data for WIC covers fiscal years 2013-2016, which is actually October 2012 through September 2016.
## Motivation
My original purpose here is two-fold:
- Explore various aspects of US Public Assistance. Show trends over recent years and better understand differences across state agencies. Although the federal government sponsors the program and provides funding, programs are administered at the state level and can vary widely. Indian nations (native Americans) also administer their own programs.
- Share with the Kaggle Community the joy - and pain - of working with government data. Data is often spread across numerous agency sites and comes in a variety of formats. Often the data is provided in Excel, with the files consisting of multiple tabs. Also, files are formatted as reports and contain aggregated data (sums, averages, etc.) along with base data.
As of March 2nd, I am expanding the purpose to support the [M5 Forecasting Challenges](https://www.kaggle.com/c/m5-forecasting-accuracy/overview/evaluation) here on Kaggle. Store sales are partly driven by participation in Public Assistance programs. Participants typically receive the items free of charge. The store then recovers the sale price from the state agencies administering the program.
|
US Public Food Assistance 1 - WIC
|
Where does it come from, who spends it, who gets it.
|
Other (specified in description)
| 9
| 12
| null | 0
| 0.012014
| 0.008948
| 0.023988
| 0.607495
| 0.163111
| 0.151098
| 34
|
8,435
| 655
| 772,431
| 772,431
| null | 1,252
| 1,252
| 2,371
|
Dataset
|
01/13/2017 04:18:10
|
02/06/2018
| 102,863
| 16,300
| 435
| 4,610
| 1
|
07/22/2025
| 655
|
# Context
[Pitchfork](https://pitchfork.com/) is a music-centric online magazine. It was started in 1995 and grew out of independent music reviewing into a general publication format, but is still famed for the variety of its music reviews. I scraped over 18,000 [Pitchfork][1] reviews (going back to January 1999). Initially, this was done to satisfy a few of [my own curiosities][2], but I bet Kagglers can come up with some really interesting analyses!
# Content
This dataset is provided as a `sqlite` database with the following tables: `artists`, `content`, `genres`, `labels`, `reviews`, `years`. For column-level information on specific tables, refer to the [Metadata tab](https://www.kaggle.com/nolanbconaway/pitchfork-data/data).
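A minimal sketch for opening the database and listing its tables before querying them; the file name `database.sqlite` is an assumption:

```python
# Sketch: inspect the SQLite file before digging in.
import sqlite3
import pandas as pd

conn = sqlite3.connect("database.sqlite")
tables = pd.read_sql("SELECT name FROM sqlite_master WHERE type='table'", conn)
print(tables)  # expect artists, content, genres, labels, reviews, years

reviews = pd.read_sql("SELECT * FROM reviews LIMIT 5", conn)
print(reviews.columns.tolist())
conn.close()
```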
# Inspiration
* Do review scores for individual artists generally improve over time, or go down?
* How has Pitchfork's review genre selection changed over time?
* Who are the most highly rated artists? The least highly rated artists?
# Acknowledgements
Gotta love [Beautiful Soup][4]!
[1]: http://pitchfork.com/
[2]: https://github.com/nolanbconaway/pitchfork-data
[3]: https://github.com/nolanbconaway/pitchfork-data/tree/master/scrape
[4]: https://www.crummy.com/software/BeautifulSoup/
|
18,393 Pitchfork Reviews
|
Pitchfork reviews from Jan 5, 1999 to Jan 8, 2017
|
Unknown
| 1
| 1
| null | 0
| 0.008509
| 0.007944
| 0.01638
| 0.60618
| 0.159753
| 0.148207
| 35
|
105,419
| 1,120,859
| 6,402,661
| 6,402,661
| null | 1,882,037
| 1,920,174
| 1,138,221
|
Dataset
|
01/26/2021 19:29:28
|
01/26/2021
| 1,589,502
| 259,569
| 3,378
| 1,375
| 1
|
03/10/2021
| 1,120,859
|
### Similar Datasets
- [**HIGHLIGHTED**] CERN Electron Collision Data โ๏ธ[LINK](https://www.kaggle.com/datasets/fedesoriano/cern-electron-collision-data)
- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)
- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)
- Cirrhosis Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/cirrhosis-prediction-dataset)
- Heart Failure Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/heart-failure-prediction)
- Stellar Classification Dataset - SDSS17: [LINK](https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17)
- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
According to the World Health Organization (WHO), stroke is the 2nd leading cause of death globally, responsible for approximately 11% of total deaths.
This dataset is used to predict whether a patient is likely to have a stroke based on input parameters like gender, age, various diseases, and smoking status. Each row in the data provides relevant information about the patient.
### Attribute Information
1) id: unique identifier
2) gender: "Male", "Female" or "Other"
3) age: age of the patient
4) hypertension: 0 if the patient doesn't have hypertension, 1 if the patient has hypertension
5) heart\_disease: 0 if the patient doesn't have any heart diseases, 1 if the patient has a heart disease
6) ever\_married: "No" or "Yes"
7) work\_type: "children", "Govt\_job", "Never\_worked", "Private" or "Self-employed"
8) Residence\_type: "Rural" or "Urban"
9) avg\_glucose\_level: average glucose level in blood
10) bmi: body mass index
11) smoking\_status: "formerly smoked", "never smoked", "smokes" or "Unknown"*
12) stroke: 1 if the patient had a stroke or 0 if not
*Note: "Unknown" in smoking\_status means that the information is unavailable for this patient
### Acknowledgements
**(Confidential Source)** - *Use only for educational purposes*
If you use this dataset in your research, please credit the author.
|
Stroke Prediction Dataset
|
11 clinical features for predicting stroke events
|
Data files ยฉ Original Authors
| 1
| 1
| null | 0
| 0.131481
| 0.061686
| 0.26084
| 0.180802
| 0.158702
| 0.147301
| 36
|
8,144
| 2,894
| 753,574
| null | 147
| 4,877
| 4,877
| 7,514
|
Dataset
|
10/10/2017 18:05:30
|
02/05/2018
| 202,097
| 19,697
| 832
| 4,346
| 1
|
09/04/2020
| 2,894
|
### Context
The Kepler Space Observatory is a NASA-built satellite that was launched in 2009. The telescope is dedicated to searching for exoplanets in star systems besides our own, with the ultimate goal of possibly finding other habitable planets. The original mission ended in 2013 due to mechanical failures, but the telescope has nevertheless been functional since 2014 on a "K2" extended mission.
Kepler had verified 1284 new exoplanets as of May 2016. As of October 2017 there are over 3000 confirmed exoplanets total (using all detection methods, including ground-based ones). The telescope is still active and continues to collect new data on its extended mission.
### Content
This dataset is a cumulative record of all observed Kepler "objects of interest" — basically, all of the approximately 10,000 exoplanet candidates Kepler has taken observations on.
This dataset has an extensive data dictionary, which can be accessed [here](https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.html). Highlightable columns of note are:
* `kepoi_name`: A KOI is a target identified by the Kepler Project that displays at least one transit-like sequence within Kepler time-series photometry that appears to be of astrophysical origin and initially consistent with a planetary transit hypothesis
* `kepler_name`: [These names] are intended to clearly indicate a class of objects that have been confirmed or validated as planets, a step up from the planet candidate designation.
* `koi_disposition`: The disposition in the literature towards this exoplanet candidate. One of CANDIDATE, FALSE POSITIVE, NOT DISPOSITIONED or CONFIRMED.
* `koi_pdisposition`: The disposition Kepler data analysis has towards this exoplanet candidate. One of FALSE POSITIVE, NOT DISPOSITIONED, and CANDIDATE.
* `koi_score`: A value between 0 and 1 that indicates the confidence in the KOI disposition. For CANDIDATEs, a higher value indicates more confidence in its disposition, while for FALSE POSITIVEs, a higher value indicates less confidence in that disposition.
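A minimal sketch for looking at how dispositions and confidence scores are distributed; the file name `cumulative.csv` is an assumption, while the column names come from the list above:

```python
# Sketch: distribution of dispositions and their confidence scores.
import pandas as pd

koi = pd.read_csv("cumulative.csv")
print(koi["koi_disposition"].value_counts())
print(koi.groupby("koi_disposition")["koi_score"].describe())
```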
### Acknowledgements
This dataset was published as-is by NASA. You can access the original table [here](https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi). More data from the Kepler mission is available from the same source [here](https://exoplanetarchive.ipac.caltech.edu/docs/data.html).
### Inspiration
* How often are exoplanets confirmed in the existing literature disconfirmed by measurements from Kepler? How about the other way round?
* What general characteristics about exoplanets (that we can find) can you derive from this dataset?
* What exoplanets get assigned names in the literature? What is the distribution of confidence scores?
See also: the [Kepler Labeled Time Series](https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data) and [Open Exoplanets Catalogue](https://www.kaggle.com/mrisdal/open-exoplanet-catalogue) datasets.
|
Kepler Exoplanet Search Results
|
10000 exoplanet candidates examined by the Kepler Space Observatory
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.016717
| 0.015193
| 0.019793
| 0.571466
| 0.155792
| 0.144786
| 37
|
9,653
| 1,067
| 862,007
| 862,007
| null | 1,925
| 1,925
| 3,045
|
Dataset
|
03/31/2017 06:55:16
|
02/06/2018
| 1,740,485
| 259,953
| 2,776
| 1,177
| 1
|
11/06/2019
| 1,067
|
Uncover the factors that lead to employee attrition and explore important questions such as "show me a breakdown of distance from home by job role and attrition" or "compare average monthly income by education and attrition". This is a fictional data set created by IBM data scientists.
Education
1 'Below College'
2 'College'
3 'Bachelor'
4 'Master'
5 'Doctor'
EnvironmentSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobInvolvement
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
JobSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
PerformanceRating
1 'Low'
2 'Good'
3 'Excellent'
4 'Outstanding'
RelationshipSatisfaction
1 'Low'
2 'Medium'
3 'High'
4 'Very High'
WorkLifeBalance
1 'Bad'
2 'Good'
3 'Better'
4 'Best'
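A minimal sketch that maps these coded columns back to readable labels; the file name and the `Attrition` column name are assumptions, while the mappings are exactly those listed above:

```python
# Sketch: decode the ordinal survey columns, then compare attrition by satisfaction.
import pandas as pd

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
satisfaction = {1: "Low", 2: "Medium", 3: "High", 4: "Very High"}
mappings = {
    "Education": {1: "Below College", 2: "College", 3: "Bachelor", 4: "Master", 5: "Doctor"},
    "EnvironmentSatisfaction": satisfaction,
    "JobInvolvement": satisfaction,
    "JobSatisfaction": satisfaction,
    "RelationshipSatisfaction": satisfaction,
    "PerformanceRating": {1: "Low", 2: "Good", 3: "Excellent", 4: "Outstanding"},
    "WorkLifeBalance": {1: "Bad", 2: "Good", 3: "Better", 4: "Best"},
}
labelled = df.replace(mappings)
print(labelled.groupby("JobSatisfaction")["Attrition"].value_counts(normalize=True))
```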
|
IBM HR Analytics Employee Attrition & Performance
|
Predict attrition of your valuable employees
|
Database: Open Database, Contents: Database Contents
| 1
| 1
|
simulations
| 1
| 0.14397
| 0.050693
| 0.261225
| 0.154767
| 0.152664
| 0.142076
| 38
|
105,413
| 1,582,403
| 6,402,661
| 6,402,661
| null | 2,603,715
| 2,647,230
| 1,602,483
|
Dataset
|
09/10/2021 18:11:57
|
09/10/2021
| 1,375,970
| 236,288
| 2,970
| 1,514
| 1
|
10/13/2021
| 1,582,403
|
### Similar Datasets
- Hepatitis C Dataset: [LINK](https://www.kaggle.com/fedesoriano/hepatitis-c-dataset)
- Body Fat Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/body-fat-prediction-dataset)
- Cirrhosis Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/cirrhosis-prediction-dataset)
- Stroke Prediction Dataset: [LINK](https://www.kaggle.com/fedesoriano/stroke-prediction-dataset)
- Stellar Classification Dataset - SDSS17: [LINK](https://www.kaggle.com/fedesoriano/stellar-classification-dataset-sdss17)
- Wind Speed Prediction Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/wind-speed-prediction-dataset)
- Spanish Wine Quality Dataset: [LINK](https://www.kaggle.com/datasets/fedesoriano/spanish-wine-quality-dataset)
### Context
Cardiovascular diseases (CVDs) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year, which accounts for 31% of all deaths worldwide. Four out of 5 CVD deaths are due to heart attacks and strokes, and one-third of these deaths occur prematurely in people under 70 years of age. Heart failure is a common event caused by CVDs, and this dataset contains 11 features that can be used to predict a possible heart disease.
People with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease) need early detection and management wherein a machine learning model can be of great help.
### Attribute Information
1. Age: age of the patient [years]
1. Sex: sex of the patient [M: Male, F: Female]
1. ChestPainType: chest pain type [TA: Typical Angina, ATA: Atypical Angina, NAP: Non-Anginal Pain, ASY: Asymptomatic]
1. RestingBP: resting blood pressure [mm Hg]
1. Cholesterol: serum cholesterol [mg/dl]
1. FastingBS: fasting blood sugar [1: if FastingBS > 120 mg/dl, 0: otherwise]
1. RestingECG: resting electrocardiogram results [Normal: Normal, ST: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV), LVH: showing probable or definite left ventricular hypertrophy by Estes' criteria]
1. MaxHR: maximum heart rate achieved [Numeric value between 60 and 202]
1. ExerciseAngina: exercise-induced angina [Y: Yes, N: No]
1. Oldpeak: oldpeak = ST depression [Numeric value]
1. ST_Slope: the slope of the peak exercise ST segment [Up: upsloping, Flat: flat, Down: downsloping]
1. HeartDisease: output class [1: heart disease, 0: Normal]
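A minimal sketch of a cross-validated baseline on these attributes; the file name `heart.csv` is an assumption:

```python
# Sketch: one-hot encode the categorical attributes and cross-validate a model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart.csv")
y = df["HeartDisease"]
X = pd.get_dummies(df.drop(columns="HeartDisease"), drop_first=True)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```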
### Source
This dataset was created by combining different datasets already available independently but not combined before. In this dataset, 5 heart datasets are combined over 11 common features which makes it the largest heart disease dataset available so far for research purposes. The five datasets used for its curation are:
- Cleveland: 303 observations
- Hungarian: 294 observations
- Switzerland: 123 observations
- Long Beach VA: 200 observations
- Statlog (Heart) Data Set: 270 observations
Total: 1190 observations
Duplicated: 272 observations
`Final dataset: 918 observations`
Every dataset used can be found under the Index of heart disease datasets from UCI Machine Learning Repository on the following link: [https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/)
### Citation
> fedesoriano. (September 2021). Heart Failure Prediction Dataset. Retrieved [Date Retrieved] from https://www.kaggle.com/fedesoriano/heart-failure-prediction.
### Acknowledgements
Creators:
1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
1. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
1. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
1. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D.
Donor:
David W. Aha (aha '@' ics.uci.edu) (714) 856-8779
|
Heart Failure Prediction Dataset
|
11 clinical features for predicting heart disease events.
|
Database: Open Database, Contents: ยฉ Original Authors
| 1
| 1
| null | 0
| 0.113818
| 0.054236
| 0.237445
| 0.19908
| 0.151144
| 0.140757
| 39
|
8,117
| 3,491
| 753,574
| 753,574
| null | 5,624
| 5,624
| 8,645
|
Dataset
|
10/26/2017 14:10:15
|
02/05/2018
| 97,756
| 14,199
| 301
| 4,367
| 1
|
07/22/2025
| 3,491
|

## Methodology
This is a data dump of the top 100 products (ordered by number of mentions) from every subreddit that has posted an Amazon product. The data was extracted from [Google Bigquery's Reddit Comment database](https://bigquery.cloud.google.com/dataset/fh-bigquery:reddit_comments). It only extracts Amazon links, so it is certainly a subset of all products posted to Reddit.
The data is organized in a file structure that follows:
```
reddits/<first lowercase letter of subreddit>/<subreddit>.csv
```
An example of where to find the top products for /r/Watches would be:
```
reddits/w/Watches.csv
```
## Definitions
Below are the column definitions found in each `<subreddit>.csv` file.
**name**
The name of the product as found on Amazon.
**category**
The category of the product as found on Amazon.
**amazon_link**
The link to the product on Amazon.
**total_mentions**
The total number of times that product was found on Reddit.
**subreddit_mentions**
The total number of times that product was found on that subreddit.
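For example, a minimal sketch of reading one of these files with pandas, using the file layout and column names described above:
```
# Minimal sketch; assumes the repository layout described above.
import pandas as pd

df = pd.read_csv("reddits/w/Watches.csv")

# Products mentioned most often within /r/Watches.
top = df.sort_values("subreddit_mentions", ascending=False).head(10)
print(top[["name", "category", "subreddit_mentions", "total_mentions"]])
```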
## Want more?
You can search and discover products more easily on [ThingsOnReddit](https://thingsonreddit.com/)
## Acknowledgements
This dataset was published by Ben Rudolph on [GitHub](https://github.com/ThingsOnReddit/top-things), and was republished as-is on Kaggle.
|
Things on Reddit
|
The top 100 products in each subreddit from 2015 to 2017
|
Unknown
| 1
| 1
| null | 0
| 0.008086
| 0.005497
| 0.014269
| 0.574227
| 0.15052
| 0.140214
| 40
|
622
| 179,555
| 9,028
| 9,028
| null | 403,916
| 419,024
| 190,343
|
Dataset
|
04/30/2019 21:07:41
|
04/30/2019
| 122,508
| 22,720
| 315
| 4,245
| 1
|
07/22/2025
| 179,555
|
This data is also available at https://www.kaggle.com/open-powerlifting/powerlifting-database
This version of the data was created to ensure a static copy and reproducible results in a Kaggle Learn course.
|
powerlifting-database
|
An unchanging copy of the Powerlifting Database
|
Unknown
| 1
| 1
| null | 0
| 0.010134
| 0.005752
| 0.022831
| 0.558185
| 0.149226
| 0.139088
| 41
|
22,935
| 42,674
| 1,790,645
| 1,790,645
| null | 74,935
| 77,392
| 51,177
|
Dataset
|
08/11/2018 07:23:02
|
08/11/2018
| 1,066,167
| 254,453
| 1,921
| 1,642
| 1
|
02/05/2020
| 42,674
|
### Context
This data set was created only for learning customer segmentation concepts, also known as market basket analysis. I will demonstrate this by using an unsupervised ML technique (KMeans Clustering Algorithm) in its simplest form.
### Content
You own a supermarket mall and, through membership cards, you have some basic data about your customers such as Customer ID, age, gender, annual income and spending score.
Spending Score is something you assign to the customer based on your defined parameters like customer behavior and purchasing data.
**Problem Statement**
You own the mall and want to understand which customers can be easily converted [Target Customers], so that this insight can be given to the marketing team to plan the strategy accordingly.
### Acknowledgements
From Udemy's Machine Learning A-Z course.
I am new to the data science field and want to share my knowledge with others.
https://github.com/SteffiPeTaffy/machineLearningAZ/blob/master/Machine%20Learning%20A-Z%20Template%20Folder/Part%204%20-%20Clustering/Section%2025%20-%20Hierarchical%20Clustering/Mall_Customers.csv
### Inspiration
By the end of this case study, you should be able to answer the questions below.
1. How to achieve customer segmentation using a machine learning algorithm (KMeans Clustering) in Python in the simplest way.
2. Who are your target customers with whom you can start a marketing strategy [easy to converse].
3. How the marketing strategy works in the real world.
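A minimal sketch of the clustering step, assuming the file is named `Mall_Customers.csv` and that the income and spending-score columns are labelled `Annual Income (k$)` and `Spending Score (1-100)` (check the actual headers before running):
```
# Minimal K-Means sketch; file name and column labels are assumptions.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("Mall_Customers.csv")
X = df[["Annual Income (k$)", "Spending Score (1-100)"]]

# 5 clusters is a common starting point; confirm with the elbow method.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
df["Cluster"] = kmeans.fit_predict(X)

print(df.groupby("Cluster")[["Annual Income (k$)", "Spending Score (1-100)"]].mean())
```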
|
Mall Customer Segmentation Data
|
Market Basket Analysis
|
Other (specified in description)
| 1
| 1
| null | 0
| 0.088191
| 0.03508
| 0.255699
| 0.215911
| 0.14872
| 0.138648
| 42
|
1,763
| 30,292
| 34,547
| 34,547
| null | 38,613
| 40,416
| 38,581
|
Dataset
|
06/06/2018 05:28:35
|
06/06/2018
| 1,520,499
| 330,038
| 3,812
| 502
| 1
|
11/06/2019
| 30,292
|
### Context
It is a well-known fact that Millennials LOVE Avocado Toast. It's also a well-known fact that all Millennials live in their parents' basements.
Clearly, they aren't buying homes because they are buying too much Avocado Toast!
But maybe there's hope... if a Millennial could find a city with cheap avocados, they could live out the Millennial American Dream.
### Content
This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]:
> The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers' cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU's) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.
Some relevant columns in the dataset:
- `Date` - The date of the observation
- `AveragePrice` - the average price of a single avocado
- `type` - conventional or organic
- `year` - the year
- `Region` - the city or region of the observation
- `Total Volume` - Total number of avocados sold
- `4046` - Total number of avocados with PLU 4046 sold
- `4225` - Total number of avocados with PLU 4225 sold
- `4770` - Total number of avocados with PLU 4770 sold
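As a quick starting point, a minimal loading sketch; the file name `avocado.csv` is an assumption, and the column capitalization may differ slightly in the actual CSV:
```
# Minimal sketch; file name and exact column casing are assumptions.
import pandas as pd

df = pd.read_csv("avocado.csv", parse_dates=["Date"])

# Average price of conventional vs. organic avocados per year.
print(df.groupby(["year", "type"])["AveragePrice"].mean().unstack())
```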
### Acknowledgements
Many thanks to the Hass Avocado Board for sharing this data!!
http://www.hassavocadoboard.com/retail/volume-and-price-data
### Inspiration
In which cities can millennials have their avocado toast AND buy a home?
Was the Avocadopocalypse of 2017 real?
[1]: http://www.hassavocadoboard.com/retail/volume-and-price-data
|
Avocado Prices
|
Historical data on avocado prices and sales volume in multiple US markets
|
Database: Open Database, Contents: ยฉ Original Authors
| 1
| 1
| null | 0
| 0.125773
| 0.069612
| 0.331653
| 0.066009
| 0.148262
| 0.138249
| 43
|
7,167
| 128
| 680,332
| 680,332
| null | 270
| 270
| 1,447
|
Dataset
|
08/25/2016 15:52:49
|
02/06/2018
| 1,214,889
| 228,904
| 2,273
| 1,449
| 1
|
11/06/2019
| 128
|
This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.
It's a great dataset for evaluating simple regression models.
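For instance, a minimal regression sketch; the file name `kc_house_data.csv` and the feature columns used here are assumptions about the file layout:
```
# Minimal regression sketch; file name and feature columns are assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("kc_house_data.csv")
features = ["sqft_living", "bedrooms", "bathrooms"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["price"], test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```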
|
House Sales in King County, USA
|
Predict house price using regression
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.100493
| 0.041508
| 0.230024
| 0.190533
| 0.140639
| 0.131589
| 44
|
7,508
| 5,227
| 710,779
| 710,779
| null | 7,876
| 7,876
| 11,315
|
Dataset
|
11/24/2017 03:14:59
|
02/06/2018
| 1,037,219
| 257,119
| 1,497
| 1,244
| 1
|
12/16/2020
| 5,227
|
### Context
This is the dataset used in the second chapter of Aurélien Géron's recent book 'Hands-On Machine Learning with Scikit-Learn and TensorFlow'. It serves as an excellent introduction to implementing machine learning algorithms because it requires rudimentary data cleaning, has an easily understandable list of variables, and sits at an optimal size between being too toyish and too cumbersome.
The data contains information from the 1990 California census. So although it may not help you with predicting current housing prices like the Zillow Zestimate dataset, it does provide an accessible introductory dataset for teaching people about the basics of machine learning.
### Content
The data pertains to the houses found in a given California district and some summary stats about them based on the 1990 census data. Be warned: the data aren't cleaned, so there are some preprocessing steps required! The columns are as follows; their names are pretty self-explanatory:
- longitude
- latitude
- housing_median_age
- total_rooms
- total_bedrooms
- population
- households
- median_income
- median_house_value
- ocean_proximity
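A minimal preprocessing-and-regression sketch follows; it assumes the file is named `housing.csv`, that `total_bedrooms` is the column with missing values, and that `ocean_proximity` is the only categorical column:
```
# Minimal sketch; file name and the preprocessing assumptions are noted above.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("housing.csv")

# Fill missing bedroom counts and one-hot encode the categorical column.
df["total_bedrooms"] = df["total_bedrooms"].fillna(df["total_bedrooms"].median())
df = pd.get_dummies(df, columns=["ocean_proximity"])

X = df.drop(columns=["median_house_value"])
y = df["median_house_value"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LinearRegression().fit(X_train, y_train)
print("R^2:", model.score(X_test, y_test))
```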
### Acknowledgements
This data was initially featured in the following paper:
Pace, R. Kelley, and Ronald Barry. "Sparse spatial autoregressions." Statistics & Probability Letters 33.3 (1997): 291-297.
and I encountered it in 'Hands-On Machine Learning with Scikit-Learn and TensorFlow' by Aurélien Géron.
Aurélien Géron wrote:
This dataset is a modified version of the California Housing dataset available from:
[Luís Torgo's page](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) (University of Porto)
### Inspiration
See my kernel on machine learning basics in R using this dataset, or venture over to the following link for a python based introductory tutorial: https://github.com/ageron/handson-ml/tree/master/datasets/housing
|
California Housing Prices
|
Median house prices for California districts derived from the 1990 census.
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.085797
| 0.027337
| 0.258378
| 0.163577
| 0.133772
| 0.12555
| 45
|
5,609
| 478
| 495,305
| null | 7
| 974
| 974
| 2,099
|
Dataset
|
12/01/2016 23:08:00
|
02/06/2018
| 1,113,872
| 168,193
| 2,558
| 1,694
| 1
|
11/06/2019
| 478
|
### Context
Although this dataset was originally contributed to the UCI Machine Learning repository nearly 30 years ago, mushroom hunting (otherwise known as "shrooming") is enjoying new peaks in popularity. Learn which features spell certain death and which are most palatable in this dataset of mushroom characteristics. And how certain can your model be?
### Content
This dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota family, drawn from The Audubon Society Field Guide to North American Mushrooms (1981). Each species is identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended. This latter class was combined with the poisonous one. The Guide clearly states that there is no simple rule for determining the edibility of a mushroom; no rule like "leaflets three, let it be" for Poisonous Oak and Ivy.
- **Time period**: Donated to UCI ML 27 April 1987
### Inspiration
- What types of machine learning models perform best on this dataset?
- Which features are most indicative of a poisonous mushroom?
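A minimal sketch for both questions; the file name `mushrooms.csv` and the `class` target column are assumptions about the file layout:
```
# Minimal sketch; file name and target column name are assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("mushrooms.csv")

# All features are categorical, so one-hot encode them.
X = pd.get_dummies(df.drop(columns=["class"]))
y = df["class"]

clf = DecisionTreeClassifier(random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Which encoded features are most indicative of the label?
clf.fit(X, y)
print(pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False).head(5))
```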
### Acknowledgements
This dataset was originally donated to the UCI Machine Learning repository. You can learn more about past research using the data [here][1].
#[Start a new kernel][2]
[1]: https://archive.ics.uci.edu/ml/datasets/Mushroom
[2]: https://www.kaggle.com/uciml/mushroom-classification/kernels?modal=true
|
Mushroom Classification
|
Safe to eat or deadly poison?
|
CC0: Public Domain
| 1
| 1
|
lending, banking, crowdfunding, insurance, investing, currencies and foreign exchange
| 6
| 0.092137
| 0.046712
| 0.169016
| 0.222748
| 0.132653
| 0.124563
| 46
|
7
| 18
| 500,099
| null | 229
| 2,157
| 2,157
| 993
|
Dataset
|
01/08/2016 21:12:10
|
02/06/2018
| 1,144,926
| 238,492
| 2,385
| 1,107
| 1
|
11/06/2019
| 18
|
## Context
This dataset consists of reviews of fine foods from Amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain-text review. It also includes reviews from all other Amazon categories.
## Contents
- Reviews.csv: Pulled from the corresponding SQLite table named Reviews in database.sqlite
- database.sqlite: Contains the table 'Reviews'

Data includes:
- Reviews from Oct 1999 - Oct 2012
- 568,454 reviews
- 256,059 users
- 74,258 products
- 260 users with > 50 reviews
[Reviews wordcloud](https://www.kaggle.com/benhamner/d/snap/amazon-fine-food-reviews/reviews-wordcloud)
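A minimal sketch of reading the SQLite copy; the `Score`, `Summary`, and `Text` column names are assumptions about the `Reviews` table schema:
```
# Minimal sketch; assumes database.sqlite is in the working directory and that
# the Reviews table has Score, Summary, and Text columns.
import sqlite3
import pandas as pd

con = sqlite3.connect("database.sqlite")
df = pd.read_sql_query("SELECT Score, Summary, Text FROM Reviews LIMIT 1000", con)
con.close()

print(df["Score"].value_counts())
```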
## Acknowledgements
See [this SQLite query](https://www.kaggle.com/benhamner/d/snap/amazon-fine-food-reviews/data-sample) for a quick sample of the dataset.
If you publish articles based on this dataset, please cite the following paper:
- J. McAuley and J. Leskovec. [From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews](http://i.stanford.edu/~julian/pdfs/www13.pdf). WWW, 2013.
|
Amazon Fine Food Reviews
|
Analyze ~500,000 food reviews from Amazon
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.094706
| 0.043553
| 0.239659
| 0.145562
| 0.13087
| 0.122987
| 47
|
36,908
| 268,833
| 3,023,930
| 3,023,930
| null | 611,395
| 630,459
| 280,171
|
Dataset
|
07/18/2019 19:16:23
|
07/18/2019
| 1,320,246
| 224,895
| 3,051
| 882
| 1
|
11/06/2019
| 268,833
|
### Context
Since 2008, guests and hosts have used Airbnb to expand on traveling possibilities and present a more unique, personalized way of experiencing the world. This dataset describes the listing activity and metrics in NYC, NY for 2019.
### Content
This data file includes all needed information to find out more about hosts, geographical availability, necessary metrics to make predictions and draw conclusions.
### Acknowledgements
This public dataset is part of Airbnb, and the original source can be found on this [website](http://insideairbnb.com).
### Inspiration
- What can we learn about different hosts and areas?
- What can we learn from predictions? (ex: locations, prices, reviews, etc)
- Which hosts are the busiest and why?
- Is there any noticeable difference of traffic among different areas and what could be the reason for it?
|
New York City Airbnb Open Data
|
Airbnb listings and metrics in NYC, NY, USA (2019)
|
CC0: Public Domain
| 3
| 3
| null | 0
| 0.109208
| 0.055715
| 0.225996
| 0.115976
| 0.126724
| 0.119314
| 48
|
14,762
| 17,860
| 1,272,228
| 1,272,228
| null | 23,404
| 23,408
| 25,592
|
Dataset
|
03/22/2018 15:18:06
|
03/22/2018
| 790,755
| 222,937
| 1,026
| 1,493
| 1
|
07/22/2025
| 17,860
|
### Context
The Iris flower data set is a multivariate data set introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species. The data set consists of 50 samples from each of three species of Iris (Iris Setosa, Iris virginica, and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters.
This dataset became a typical test case for many statistical classification techniques in machine learning, such as support vector machines.
### Content
The dataset contains a set of 150 records under 5 attributes - Petal Length, Petal Width, Sepal Length, Sepal Width and Class (Species).
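As a minimal multi-class classification sketch, scikit-learn's bundled copy of the same Iris data can be used, so no CSV path needs to be assumed:
```
# Minimal multi-class classification sketch on the bundled Iris data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```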
### Acknowledgements
This dataset is free and is publicly available at the UCI Machine Learning Repository
|
Iris Flower Dataset
|
Iris flower data set used for multi-class classification.
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.06541
| 0.018736
| 0.224028
| 0.196318
| 0.126123
| 0.118781
| 49
|
2,338
| 1,346
| 110,652
| 110,652
| null | 13,233,499
| 13,929,065
| 3,881
|
Dataset
|
06/02/2017 05:09:11
|
02/06/2018
| 1,623,232
| 227,600
| 3,843
| 510
| 1
|
11/06/2019
| 1,346
|
### Context
Bitcoin is the longest running and most well known cryptocurrency, first released as open source in 2009 by the anonymous Satoshi Nakamoto. Bitcoin serves as a decentralized medium of digital exchange, with transactions verified and recorded in a public distributed ledger (the blockchain) without the need for a trusted record keeping authority or central intermediary. Transaction blocks contain a SHA-256 cryptographic hash of previous transaction blocks, and are thus "chained" together, serving as an immutable record of all transactions that have ever occurred. As with any currency/commodity on the market, bitcoin trading and financial instruments soon followed public adoption of bitcoin and continue to grow. Included here is historical bitcoin market data at 1-min intervals for select bitcoin exchanges where trading takes place. Happy (data) mining!
### Content
(See https://github.com/mczielinski/kaggle-bitcoin/ for automation/scraping script)
```
btcusd_1-min_data.csv
```
CSV files for select bitcoin exchanges for the time period of Jan 2012 to Present (Measured by UTC day), with minute to minute updates of OHLC (Open, High, Low, Close) and Volume in BTC.
If a timestamp is missing, or if there are jumps, this may be because the exchange (or its API) was down, the exchange (or its API) did not exist, or some other unforeseen technical error in data reporting or gathering. I'm not perfect, and I'm also busy! All effort has been made to deduplicate entries and verify the contents are correct and complete to the best of my ability, but obviously trust at your own risk.
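A minimal resampling sketch; the `Timestamp` column (Unix seconds) and the OHLC column names are assumptions about the CSV header:
```
# Minimal sketch; column names are assumptions about the CSV header.
import pandas as pd

df = pd.read_csv("btcusd_1-min_data.csv")
df["Timestamp"] = pd.to_datetime(df["Timestamp"], unit="s")
df = df.set_index("Timestamp")

# Aggregate the 1-minute bars into daily OHLC bars.
daily = df.resample("1D").agg({"Open": "first", "High": "max", "Low": "min", "Close": "last"})
print(daily.tail())
```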
### Acknowledgements and Inspiration
Bitcoin charts for the data, originally. Now thank you to the Bitstamp API directly. The various exchange APIs, for making it difficult or unintuitive enough to get OHLC and volume data at 1-min intervals that I set out on this data scraping project. Satoshi Nakamoto and the novel core concept of the blockchain, as well as its first execution via the bitcoin protocol. I'd also like to thank viewers like you! Can't wait to see what code or insights you all have to share.
|
Bitcoin Historical Data
|
Bitcoin data at 1-min intervals from select exchanges, Jan 2012 to Present
|
CC BY-SA 4.0
| 375
| 361
| null | 0
| 0.134271
| 0.070178
| 0.228714
| 0.067061
| 0.125056
| 0.117833
| 50
|
2,065
| 557,629
| 71,388
| 71,388
| null | 2,516,524
| 2,559,305
| 571,269
|
Dataset
|
03/16/2020 06:24:37
|
03/16/2020
| 1,014,839
| 298,365
| 2,085
| 594
| 1
|
04/05/2020
| 557,629
|
### Context
Coronaviruses are a large family of viruses which may cause illness in animals or humans. In humans, several coronaviruses are known to cause respiratory infections ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). The most recently discovered coronavirus causes coronavirus disease COVID-19 - World Health Organization
The number of new cases is increasing day by day around the world. This dataset has information from the states and union territories of India at the daily level.
State level data comes from [Ministry of Health & Family Welfare](https://www.mohfw.gov.in/)
Testing data and vaccination data comes from [covid19india](https://www.covid19india.org/). Huge thanks to them for their efforts!
Update on April 20, 2021: Thanks to the [Team at ISIBang](https://www.isibang.ac.in/~athreya/incovid19/), I was able to get the historical data for the periods that I had missed and updated the csv file.
### Content
COVID-19 cases at daily level is present in `covid_19_india.csv` file
Statewise testing details in `StatewiseTestingDetails.csv` file
Travel history dataset by @dheerajmpai - https://www.kaggle.com/dheerajmpai/covidindiatravelhistory
### Acknowledgements
Thanks to Indian [Ministry of Health & Family Welfare](https://www.mohfw.gov.in/) for making the data available to general public.
Thanks to [covid19india.org](http://portal.covid19india.org/) for making the individual level details, testing details, vaccination details available to general public.
Thanks to [Wikipedia](https://en.wikipedia.org/wiki/List_of_states_and_union_territories_of_India_by_population) for population information.
Thanks to the [Team at ISIBang](https://www.isibang.ac.in/~athreya/incovid19/)
Photo Courtesy - https://hgis.uw.edu/virus/
### Inspiration
Looking for data based suggestions to stop / delay the spread of virus
|
COVID-19 in India
|
Dataset on Novel Corona Virus Disease 2019 in India
|
CC0: Public Domain
| 237
| 237
| null | 0
| 0.083946
| 0.038075
| 0.299825
| 0.078107
| 0.124988
| 0.117772
| 51
|
38,761
| 216,167
| 3,308,439
| 3,308,439
| null | 477,177
| 493,143
| 227,259
|
Dataset
|
06/04/2019 02:58:45
|
06/04/2019
| 1,247,947
| 265,768
| 1,501
| 758
| 1
|
07/22/2025
| 216,167
|
### Context
This data set dates from 1988 and consists of four databases: Cleveland, Hungary, Switzerland, and Long Beach V. It contains 76 attributes, including the predicted attribute, but all published experiments refer to using a subset of 14 of them. The "target" field refers to the presence of heart disease in the patient. It is integer valued 0 = no disease and 1 = disease.
### Content
Attribute Information:
> 1. age
> 2. sex
> 3. chest pain type (4 values)
> 4. resting blood pressure
> 5. serum cholesterol in mg/dl
> 6. fasting blood sugar > 120 mg/dl
> 7. resting electrocardiographic results (values 0,1,2)
> 8. maximum heart rate achieved
> 9. exercise induced angina
> 10. oldpeak = ST depression induced by exercise relative to rest
> 11. the slope of the peak exercise ST segment
> 12. number of major vessels (0-3) colored by fluoroscopy
> 13. thal: 0 = normal; 1 = fixed defect; 2 = reversible defect
The names and social security numbers of the patients were recently removed from the database, replaced with dummy values.
|
Heart Disease Dataset
|
Public Health Dataset
|
Unknown
| 2
| 1
| null | 0
| 0.103228
| 0.02741
| 0.267069
| 0.099671
| 0.124345
| 0.1172
| 52
|
2,343
| 2,477
| 111,640
| 111,640
| null | 4,140
| 4,140
| 6,554
|
Dataset
|
09/13/2017 22:07:02
|
02/06/2018
| 1,128,885
| 242,898
| 2,243
| 833
| 1
|
11/06/2019
| 2,477
|
### Context
This is the sentiment140 dataset. It contains 1,600,000 tweets extracted using the Twitter API. The tweets have been annotated (0 = negative, 4 = positive) and can be used to detect sentiment.
### Content
It contains the following 6 fields:
1. **target**: the polarity of the tweet (*0* = negative, *2* = neutral, *4* = positive)
2. **ids**: The id of the tweet ( *2087*)
3. **date**: the date of the tweet (*Sat May 16 23:58:44 UTC 2009*)
4. **flag**: The query (*lyx*). If there is no query, then this value is NO_QUERY.
5. **user**: the user that tweeted (*robotickilldozr*)
6. **text**: the text of the tweet (*Lyx is cool*)
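A minimal loading sketch; the file name and the latin-1 encoding are assumptions, and the column names follow the six fields listed above:
```
# Minimal sketch; file name and encoding are assumptions.
import pandas as pd

cols = ["target", "ids", "date", "flag", "user", "text"]
df = pd.read_csv("training.1600000.processed.noemoticon.csv", names=cols, encoding="latin-1")

print(df["target"].value_counts())  # 0 = negative, 4 = positive
```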
### Acknowledgements
The official link regarding the dataset with resources about how it was generated is [here][1]
The official paper detailing the approach is [here][2]
Citation: Go, A., Bhayani, R. and Huang, L., 2009. Twitter sentiment classification using distant supervision. *CS224N Project Report, Stanford, 1(2009), p.12*.
### Inspiration
To detect severity from tweets. You [may have a look at this][3].
[1]: http://help.sentiment140.com/for-students/
[2]: http://cs.stanford.edu/people/alecmgo/papers/TwitterDistantSupervision09.pdf
[3]: https://www.linkedin.com/pulse/social-machine-learning-h2o-twitter-python-marios-michailidis
|
Sentiment140 dataset with 1.6 million tweets
|
Sentiment analysis with tweets
|
Other (specified in description)
| 2
| 2
| null | 0
| 0.093379
| 0.04096
| 0.244087
| 0.109533
| 0.12199
| 0.115104
| 53
|
32,694
| 116,573
| 2,603,295
| 2,603,295
| null | 3,551,030
| 3,604,079
| 126,445
|
Dataset
|
02/06/2019 18:20:07
|
02/06/2019
| 160,109
| 101,599
| 371
| 2,714
| 2
|
07/07/2025
| 116,573
|
Dataset for Kaggle's [Data Visualization](https://www.kaggle.com/learn/data-visualization) course
|
Interesting Data to Visualize
|
For Kaggle's Data Visualization Course
|
Unknown
| 2
| 4
| null | 0
| 0.013244
| 0.006775
| 0.102096
| 0.35687
| 0.119746
| 0.113102
| 54
|
1,982
| 54,339
| 67,483
| 67,483
| null | 104,884
| 111,874
| 63,066
|
Dataset
|
09/19/2018 13:42:20
|
09/19/2018
| 1,084,932
| 213,831
| 2,138
| 1,024
| 1
|
11/06/2019
| 54,339
|
# Overview
A dataset that is more interesting than digit classification, intended to get biology and medicine students more excited about machine learning and image processing.
## Original Data Source
- Original Challenge: https://challenge2018.isic-archive.com
- https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T
[1] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, Allan Halpern: "Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)", 2018; https://arxiv.org/abs/1902.03368
[2] Tschandl, P., Rosendahl, C. & Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5, 180161 doi:10.1038/sdata.2018.161 (2018).
## From Authors
Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available dataset of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc).
More than 50% of lesions are confirmed through histopathology (histo), the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The dataset includes lesions with multiple images, which can be tracked by the lesion_id-column within the HAM10000_metadata file.
The test set is not public, but the evaluation server remains running (see the challenge website). Any publications written using the HAM10000 data should be evaluated on the official test set hosted there, so that methods can be fairly compared.
|
Skin Cancer MNIST: HAM10000
|
a large collection of multi-source dermatoscopic images of pigmented lesions
|
CC BY-NC-SA 4.0
| 2
| 2
|
finance, real estate, marketing, ratings and reviews, e-commerce services, travel
| 6
| 0.089744
| 0.039042
| 0.214878
| 0.134648
| 0.119578
| 0.112952
| 55
|
17,028
| 5,857
| 1,437,850
| 1,437,850
| null | 13,208,880
| 13,902,179
| 12,161
|
Dataset
|
12/02/2017 08:34:41
|
02/06/2018
| 1,296,086
| 201,710
| 3,254
| 676
| 1
|
11/06/2019
| 5,857
|
# Fruits-360 dataset: A dataset of images containing fruits, vegetables, nuts and seeds
## Version: 2025.09.29.0
## Content
The following fruits, vegetables, nuts and seeds are included:
Apples (different varieties: Crimson Snow, Golden, Golden-Red, Granny Smith, Pink Lady, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red, Lady Finger), Beans, Beetroot Red, Blackberry, Blueberry, Cabbage, Caju seed, Cactus fruit, Cantaloupe (2 varieties), Carambula, Carrot, Cauliflower, Cherimoya, Cherry (different varieties, Rainier), Cherry Wax (Yellow, Red, Black), Chestnut, Clementine, Cocos, Corn (with husk), Cucumber (ripened, regular), Dates, Eggplant, Fig, Ginger Root, Goosberry, Granadilla, Grape (Blue, Pink, White (different varieties)), Grapefruit (Pink, White), Guava, Hazelnut, Huckleberry, Kiwi, Kaki, Kohlrabi, Kumsquats, Lemon (normal, Meyer), Lime, Lychee, Mandarine, Mango (Green, Red), Mangostan, Maracuja, Melon Piel de Sapo, Mulberry, Nectarine (Regular, Flat), Nut (Forest, Pecan), Onion (Red, White), Orange, Papaya, Passion fruit, Peach (different varieties), Pepino, Pear (different varieties, Abate, Forelle, Kaiser, Monster, Red, Stone, Williams), Pepper (Red, Green, Orange, Yellow), Physalis (normal, with Husk), Pineapple (normal, Mini), Pistachio, Pitahaya Red, Plum (different varieties), Pomegranate, Pomelo Sweetie, Potato (Red, Sweet, White), Quince, Rambutan, Raspberry, Redcurrant, Salak, Strawberry (normal, Wedge), Tamarillo, Tangelo, Tomato (different varieties, Maroon, Cherry Red, Yellow, not ripened, Heart), Walnut, Watermelon, Zucchini (green and dark).
## Branches
The dataset has 5 major branches:
- The _100x100_ branch, where all images have 100x100 pixels. See _fruits-360_100x100_ folder.
- The _original-size_ branch, where all images are at their original (captured) size. See _fruits-360_original-size_ folder.
- The _meta_ branch, which contains additional information about the objects in the Fruits-360 dataset. See _fruits-360_dataset_meta_ folder.
- The _multi_ branch, which contains images with multiple fruits, vegetables, nuts and seeds. These images are not labeled. See _fruits-360_multi_ folder.
- The _3_body_problem_ branch, where the Training and Test folders contain different (varieties of) the 3 fruits and vegetables (Apples, Cherries and Tomatoes). See _fruits-360_3-body-problem_ folder.
## How to cite
[Mihai Oltean](https://mihaioltean.github.io), Fruits-360 dataset, 2017-
## Dataset properties
### For the _100x100_ branch
Total number of images: 150804.
Training set size: 113083 images.
Test set size: 37721 images.
Number of classes: 219 (fruits, vegetables, nuts and seeds).
Image size: 100x100 pixels.
### For the _original-size_ branch
Total number of images: 69227.
Training set size: 35283 images.
Validation set size: 17643 images
Test set size: 17537 images.
Number of classes: 104 (fruits, vegetables, nuts and seeds).
Image size: various (original, as-captured) sizes.
### For the _3-body-problem_ branch
Total number of images: 47033.
Training set size: 34800 images.
Test set size: 12233 images.
Number of classes: 3 (Apples, Cherries, Tomatoes).
Number of varieties: Apples = 29; Cherries = 12; Tomatoes = 19.
Image size: 100x100 pixels.
### For the _meta_ branch
Number of classes: 26 (fruits, vegetables, nuts and seeds).
### For the _multi_ branch
Number of images: 150.
## Filename format:
### For the _100x100_ branch
image_index_100.jpg (e.g. 31_100.jpg) or
r_image_index_100.jpg (e.g. r_31_100.jpg) or
r?_image_index_100.jpg (e.g. r2_31_100.jpg)
where "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis. "100" comes from image size (100x100 pixels).
Different varieties of the same fruit (apple, for instance) are stored as belonging to different classes.
### For the _original-size_ branch
r?_image_index.jpg (e.g. r2_31.jpg)
where "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis.
The name of the image files in the new version does NOT contain the "\_100" suffix anymore. This will help you to make the distinction between the _original-size_ branch and the _100x100_ branch.
### For the _multi_ branch
The file's name is the concatenation of the names of the fruits inside that picture.
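A minimal sketch of walking the _100x100_ branch; the exact folder layout (`fruits-360_100x100/Training/<class name>/<image>.jpg`) is an assumption about how the archive unpacks:
```
# Minimal sketch; the folder layout is an assumption.
from pathlib import Path

train_dir = Path("fruits-360_100x100/Training")
class_counts = {p.name: len(list(p.glob("*.jpg"))) for p in train_dir.iterdir() if p.is_dir()}

print("Number of classes:", len(class_counts))
print("Largest classes:", sorted(class_counts.items(), key=lambda kv: kv[1], reverse=True)[:5])
```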
## Alternate download
The Fruits-360 dataset can be downloaded from:
**Kaggle** [https://www.kaggle.com/moltean/fruits](https://www.kaggle.com/moltean/fruits)
**GitHub** [https://github.com/fruits-360](https://github.com/fruits-360)
## How fruits were filmed
Fruits and vegetables were planted in the shaft of a low-speed motor (3 rpm) and a short movie of 20 seconds was recorded.
A Logitech C920 camera was used for filming the fruits. This is one of the best webcams available.
Behind the fruits, we placed a white sheet of paper as a background.
Here is a movie showing how the fruits and vegetables are filmed: https://youtu.be/_HFKJ144JuU
### How fruits were extracted from the background
However, due to the variations in the lighting conditions, the background was not uniform and we wrote a dedicated algorithm that extracts the fruit from the background. This algorithm is of flood fill type: we start from each edge of the image and we mark all pixels there, then we mark all pixels found in the neighborhood of the already marked pixels for which the distance between colors is less than a prescribed value. We repeat the previous step until no more pixels can be marked.
All marked pixels are considered as being background (which is then filled with white) and the rest of the pixels are considered as belonging to the object.
The maximum value for the distance between 2 neighbor pixels is a parameter of the algorithm and is set (by trial and error) for each movie.
Pictures from the test-multiple_fruits folder were taken with a Nexus 5X phone or an iPhone 11.
## History
Fruits were filmed at the dates given below (YYYY.MM.DD):
2017.02.25 - First fruit filmed (Apple golden).
2023.12.30 - Official Github repository is now [https://github.com/fruits-360](https://github.com/fruits-360)
2025.04.21 - Fruits-360-3-body-problem uploaded.
## License
CC BY-SA 4.0
Copyright (c) 2017-, [Mihai Oltean](https://mihaioltean.github.io)
You are free to:
*Share* - copy and redistribute the material in any medium or format for any purpose, even commercially.
*Adapt* - remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
*Attribution* - You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
*ShareAlike* - If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
|
Fruits-360 dataset
|
A dataset with 150804 images of 219 fruits, vegetables, nuts and seeds
|
CC BY-SA 4.0
| 55
| 116
| null | 0
| 0.10721
| 0.059422
| 0.202697
| 0.088889
| 0.114554
| 0.108455
| 56
|
8,660
| 727,551
| 793,761
| 793,761
| null | 1,263,738
| 1,295,676
| 742,394
|
Dataset
|
06/20/2020 01:03:20
|
06/20/2020
| 1,111,531
| 170,287
| 2,423
| 1,104
| 1
|
08/22/2020
| 727,551
|
# About this dataset
> Cardiovascular diseases (CVDs) are the **number 1 cause of death globally**, taking an estimated **17.9 million lives each year**, which accounts for **31% of all deaths worldwide**.
Heart failure is a common event caused by CVDs and this dataset contains 12 features that can be used to predict mortality by heart failure.
> Most cardiovascular diseases can be prevented by addressing behavioural risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity and harmful use of alcohol using population-wide strategies.
> People with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease) need **early detection** and management wherein a machine learning model can be of great help.
# How to use this dataset
> - Create a model for predicting mortality caused by Heart Failure.
- Your kernel can be featured here!
- [More datasets](https://www.kaggle.com/andrewmvd/datasets)
# Acknowledgements
If you use this dataset in your research, please credit the authors
> ### Citation
Davide Chicco, Giuseppe Jurman: Machine learning can predict survival of patients with heart failure from serum creatinine and ejection fraction alone. BMC Medical Informatics and Decision Making 20, 16 (2020). ([link](https://doi.org/10.1186/s12911-020-1023-5))
> ### License
CC BY 4.0
> ### Splash icon
Icon by [Freepik](https://www.flaticon.com/authors/freepik), available on [Flaticon](https://www.flaticon.com/free-icon/heart_1186541).
> ### Splash banner
Wallpaper by [jcomp](https://br.freepik.com/jcomp), available on [Freepik](https://br.freepik.com/fotos-gratis/simplesmente-design-minimalista-com-estetoscopio-de-equipamento-de-medicina-ou-phonendoscope_5018002.htm#page=1&query=cardiology&position=3).
|
Heart Failure Prediction
|
12 clinical features for predicting death events.
|
Attribution 4.0 International (CC BY 4.0)
| 1
| 1
| null | 0
| 0.091944
| 0.044247
| 0.171121
| 0.145168
| 0.11312
| 0.107167
| 57
|
6,048
| 4,104
| 548,353
| 548,353
| null | 16,930
| 16,930
| 9,656
|
Dataset
|
11/06/2017 16:30:32
|
02/06/2018
| 521,930
| 91,355
| 1,630
| 2,179
| 1
|
11/06/2019
| 4,104
|
### Context
I'm a crowdfunding enthusiast and I've been watching Kickstarter since its early days. Right now I just collect data, and the only app I've made is this Twitter bot which tweets any project reaching some milestone: @bloomwatcher. I have a lot of other ideas, but sadly not enough time to develop them... But I hope you can!
### Content
You'll find most useful data for project analysis. Columns are self explanatory except:
- usd_pledged: conversion in US dollars of the pledged column (conversion done by kickstarter).
- usd_pledged_real: conversion in US dollars of the pledged column (conversion from [Fixer.io API][1]).
- usd_goal_real: conversion in US dollars of the goal column (conversion from [Fixer.io API][1]).
### Acknowledgements
Data are collected from [Kickstarter Platform][2]
usd conversion (usd_pledged_real and usd_goal_real columns) were generated from [convert ks pledges to usd][3] script done by [tonyplaysguitar][4]
### Inspiration
I hope to see great projects, and why not a model to predict if a project will be successful before it is released? :)
[1]: http://Fixer.io
[2]: https://www.kickstarter.com/
[3]: https://www.kaggle.com/tonyplaysguitar/convert-ks-pledges-to-usd/
[4]: https://www.kaggle.com/tonyplaysguitar
|
Kickstarter Projects
|
More than 300,000 kickstarter projects
|
CC BY-NC-SA 4.0
| 7
| 7
| null | 0
| 0.043173
| 0.029766
| 0.091802
| 0.286522
| 0.112816
| 0.106894
| 58
|
4,550
| 1,985
| 407,457
| 407,457
| null | 3,404
| 3,404
| 5,692
|
Dataset
|
08/17/2017 02:44:30
|
02/06/2018
| 1,336,284
| 234,872
| 1,945
| 442
| 1
|
01/31/2020
| 1,985
|
### Context
Typically e-commerce datasets are proprietary and consequently hard to find among publicly available data. However, [The UCI Machine Learning Repository][1] has made available this dataset containing actual transactions from 2010 and 2011. The dataset is maintained on their site, where it can be found under the title "Online Retail".
### Content
"This is a transnational data set which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail. The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers."
### Acknowledgements
Per the UCI Machine Learning Repository, this data was made available by Dr Daqing Chen, Director: Public Analytics group. chend '@' lsbu.ac.uk, School of Engineering, London South Bank University, London SE1 0AA, UK.
Image from stocksnap.io.
### Inspiration
Analyses for this dataset could include time series, clustering, classification and more.
[1]: http://archive.ics.uci.edu/ml/index.php
|
E-Commerce Data
|
Actual transactions from UK retailer
|
Unknown
| 1
| 1
| null | 0
| 0.110535
| 0.035518
| 0.236022
| 0.05812
| 0.110049
| 0.104404
| 59
|
7,331
| 121
| 693,375
| 693,375
| null | 280
| 280
| 1,440
|
Dataset
|
08/22/2016 22:44:53
|
02/06/2018
| 844,510
| 147,515
| 2,681
| 1,314
| 1
|
11/06/2019
| 121
|
This data set includes 721 Pokemon, including their number, name, first and second type, and basic stats: HP, Attack, Defense, Special Attack, Special Defense, and Speed. It has been of great use when teaching statistics to kids. With certain types you can also give a geeky introduction to machine learning.
These are the raw attributes that are used for calculating how much damage an attack will do in the games. This dataset is about the Pokemon games (*NOT* Pokemon cards or Pokemon Go).
The data as described by [Myles O'Neill](https://www.kaggle.com/mylesoneill) is:
- **#**: ID for each pokemon
- **Name**: Name of each pokemon
- **Type 1**: Each pokemon has a type, this determines weakness/resistance to attacks
- **Type 2**: Some pokemon are dual type and have 2
- **Total**: sum of all stats that come after this, a general guide to how strong a pokemon is
- **HP**: hit points, or health, defines how much damage a pokemon can withstand before fainting
- **Attack**: the base modifier for normal attacks (eg. Scratch, Punch)
- **Defense**: the base damage resistance against normal attacks
- **SP Atk**: special attack, the base modifier for special attacks (e.g. fire blast, bubble beam)
- **SP Def**: the base damage resistance against special attacks
- **Speed**: determines which pokemon attacks first each round
The data for this table has been acquired from several different sites, including:
- [pokemon.com](http://www.pokemon.com/us/pokedex/)
- [pokemondb](http://pokemondb.net/pokedex)
- [bulbapedia](http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number)
One question has been answered with this database: the type of a Pokemon cannot be inferred only from its Attack and Defense. It would be worthwhile to find which two variables can define the type of a Pokemon, if any. Two variables can be plotted in a 2D space and used as an example for machine learning, as shown in the sketch below. This could mean the creation of a visual example any geeky Machine Learning class would love.
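A minimal sketch of that 2-D plotting idea; the file name `Pokemon.csv` is an assumption, and the column names follow the list above:
```
# Minimal sketch; file name is an assumption, columns follow the description.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("Pokemon.csv")

# One scatter series per primary type.
for ptype, group in df.groupby("Type 1"):
    plt.scatter(group["Attack"], group["Defense"], label=ptype, s=10)

plt.xlabel("Attack")
plt.ylabel("Defense")
plt.legend(fontsize=6, ncol=2)
plt.show()
```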
|
Pokemon with stats
|
721 Pokemon with stats and types
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.069856
| 0.048958
| 0.148237
| 0.172781
| 0.109958
| 0.104322
| 60
|
94,042
| 1,546,318
| 5,270,083
| 5,270,083
| null | 2,549,419
| 2,592,425
| 1,566,216
|
Dataset
|
08/22/2021 18:15:05
|
08/22/2021
| 1,062,787
| 212,570
| 2,820
| 636
| 1
|
09/23/2021
| 1,546,318
|
### Context
**Problem Statement**
Customer Personality Analysis is a detailed analysis of a company's ideal customers. It helps a business to better understand its customers and makes it easier for them to modify products according to the specific needs, behaviors and concerns of different types of customers.
Customer personality analysis helps a business to modify its product based on its target customers from different types of customer segments. For example, instead of spending money to market a new product to every customer in the company's database, a company can analyze which customer segment is most likely to buy the product and then market the product only on that particular segment.
### Content
**Attributes**
**People**
* ID: Customer's unique identifier
* Year_Birth: Customer's birth year
* Education: Customer's education level
* Marital_Status: Customer's marital status
* Income: Customer's yearly household income
* Kidhome: Number of children in customer's household
* Teenhome: Number of teenagers in customer's household
* Dt_Customer: Date of customer's enrollment with the company
* Recency: Number of days since customer's last purchase
* Complain: 1 if the customer complained in the last 2 years, 0 otherwise
**Products**
* MntWines: Amount spent on wine in last 2 years
* MntFruits: Amount spent on fruits in last 2 years
* MntMeatProducts: Amount spent on meat in last 2 years
* MntFishProducts: Amount spent on fish in last 2 years
* MntSweetProducts: Amount spent on sweets in last 2 years
* MntGoldProds: Amount spent on gold in last 2 years
**Promotion**
* NumDealsPurchases: Number of purchases made with a discount
* AcceptedCmp1: 1 if customer accepted the offer in the 1st campaign, 0 otherwise
* AcceptedCmp2: 1 if customer accepted the offer in the 2nd campaign, 0 otherwise
* AcceptedCmp3: 1 if customer accepted the offer in the 3rd campaign, 0 otherwise
* AcceptedCmp4: 1 if customer accepted the offer in the 4th campaign, 0 otherwise
* AcceptedCmp5: 1 if customer accepted the offer in the 5th campaign, 0 otherwise
* Response: 1 if customer accepted the offer in the last campaign, 0 otherwise
**Place**
* NumWebPurchases: Number of purchases made through the company's website
* NumCatalogPurchases: Number of purchases made using a catalogue
* NumStorePurchases: Number of purchases made directly in stores
* NumWebVisitsMonth: Number of visits to company's website in the last month
### Target
Need to perform clustering to summarize customer segments.
### Acknowledgement
The dataset for this project is provided by Dr. Omar Romero-Hernandez.
### Solution
You can take help from following link to know more about the approach to solve this problem.
[Visit this URL ](https://thecleverprogrammer.com/2021/02/08/customer-personality-analysis-with-python/)
### Inspiration
Happy learning!
**Hope you like this dataset; please don't forget to like it.**
|
Customer Personality Analysis
|
Analysis of company's ideal customers
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.087912
| 0.051497
| 0.21361
| 0.083629
| 0.109162
| 0.103605
| 61
|
29,934
| 111,880
| 2,307,235
| 2,307,235
| null | 269,359
| 281,586
| 121,691
|
Dataset
|
01/29/2019 10:37:42
|
01/29/2019
| 605,582
| 162,296
| 1,596
| 1,417
| 1
|
05/26/2020
| 111,880
|
### Context
This is image data of Natural Scenes around the world.
### Content
This Data contains around 25k images of size 150x150 distributed under 6 categories.
{'buildings' -> 0,
'forest' -> 1,
'glacier' -> 2,
'mountain' -> 3,
'sea' -> 4,
'street' -> 5 }
The Train, Test and Prediction data are separated into separate zip files. There are around 14k images in Train, 3k in Test and 7k in Prediction.
This data was initially published on https://datahack.analyticsvidhya.com by Intel to host an image classification challenge.
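A minimal loading sketch using torchvision (an arbitrary choice); the extracted folder name `seg_train/seg_train` with one sub-folder per category is an assumption about how the zip unpacks:
```
# Minimal sketch; folder name and layout are assumptions.
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((150, 150)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("seg_train/seg_train", transform=transform)

print(train_ds.classes)          # expected: buildings, forest, glacier, mountain, sea, street
print(len(train_ds), "images")
```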
### Acknowledgements
Thanks to https://datahack.analyticsvidhya.com for the challenge and Intel for the Data
Photo by [Jan Böttinger on Unsplash][1]
### Inspiration
Want to build a powerful neural network that can classify these images with more accuracy.
[1]: https://unsplash.com/photos/27xFENkt-lc
|
Intel Image Classification
|
Image Scene Classification of Multiclass
|
Data files ยฉ Original Authors
| 2
| 1
| null | 0
| 0.050093
| 0.029145
| 0.16309
| 0.186325
| 0.107163
| 0.101801
| 62
|
37,888
| 786,787
| 3,187,350
| 3,187,350
| null | 1,351,797
| 1,384,195
| 801,807
|
Dataset
|
07/19/2020 12:24:26
|
07/19/2020
| 866,023
| 232,165
| 1,625
| 699
| 1
|
07/22/2025
| 786,787
|
The data consists of 48x48 pixel grayscale images of faces. The faces have been automatically registered so that the face is more or less centred and occupies about the same amount of space in each image.
The task is to categorize each face based on the emotion shown in the facial expression into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). The training set consists of 28,709 examples and the public test set consists of 3,589 examples.
|
FER-2013
|
Learn facial expressions from an image
|
Database: Open Database, Contents: Database Contents
| 1
| 1
| null | 0
| 0.071636
| 0.029674
| 0.233301
| 0.091913
| 0.106631
| 0.10132
| 63
|
48,851
| 511,638
| 4,476,084
| 4,476,084
| null | 944,030
| 971,732
| 524,873
|
Dataset
|
02/13/2020 01:27:20
|
02/13/2020
| 1,317,320
| 188,100
| 2,593
| 597
| 1
|
04/06/2020
| 511,638
|
### Context
Have you ever wondered when the best time of year to book a hotel room is? Or the optimal length of stay in order to get the best daily rate? What if you wanted to predict whether or not a hotel was likely to receive a disproportionately high number of special requests?
This hotel booking dataset can help you explore those questions!
### Content
This data set contains booking information for a city hotel and a resort hotel, and includes information such as when the booking was made, length of stay, the number of adults, children, and/or babies, and the number of available parking spaces, among other things.
All personally identifying information has been removed from the data.
### Acknowledgements
The data is originally from the article [**Hotel Booking Demand Datasets**](https://www.sciencedirect.com/science/article/pii/S2352340918315191), written by Nuno Antonio, Ana Almeida, and Luis Nunes for Data in Brief, Volume 22, February 2019.
The data was downloaded and cleaned by Thomas Mock and Antoine Bichat for [#TidyTuesday during the week of February 11th, 2020](https://github.com/rfordatascience/tidytuesday/blob/master/data/2020/2020-02-11/readme.md).
### Inspiration
This data set is ideal for anyone looking to practice their exploratory data analysis (EDA) or get started in building predictive models!
If you're looking for inspiration on data visualizations, check out the [#TidyTuesday program](https://github.com/rfordatascience/tidytuesday), a free, weekly online event that encourages participants to create and share their [code and visualizations for a given data set on Twitter](https://twitter.com/search?q=%23TidyTuesday&src=typed_query).
If you'd like to dive into predictive modeling, [Julia Silge](https://twitter.com/juliasilge) has an [accessible and fantastic walk-through](https://juliasilge.com/blog/hotels-recipes/) which highlights the [`tidymodels`](https://www.tidyverse.org/blog/2018/08/tidymodels-0-0-1/) R package.
|
Hotel booking demand
|
From the paper: hotel booking demand datasets
|
Attribution 4.0 International (CC BY 4.0)
| 1
| 1
| null | 0
| 0.108966
| 0.047351
| 0.189021
| 0.078501
| 0.10596
| 0.100714
| 64
|
22,699
| 29,561
| 1,772,071
| 1,772,071
| null | 37,705
| 39,327
| 37,836
|
Dataset
|
06/01/2018 17:19:08
|
06/01/2018
| 839,568
| 204,474
| 1,768
| 718
| 1
|
11/06/2019
| 29,561
|
### Context
A popular component of computer vision and deep learning is identifying faces for various applications, from logging into your phone with your face to searching through surveillance images for a particular suspect. This dataset is great for training and testing models for face detection, particularly for recognising facial attributes such as whether people have brown hair, are smiling, or are wearing glasses. Images cover large pose variations, background clutter, and diverse people, supported by a large quantity of images and rich annotations. This data was originally collected by researchers at MMLAB, The Chinese University of Hong Kong (specific reference in the Acknowledgements section).
### Content
**Overall**
- 202,599 number of face images of various celebrities
- 10,177 unique identities, but names of identities are not given
- 40 binary attribute annotations per image
- 5 landmark locations
**Data Files**
- **img_align_celeba.zip**: All the face images, cropped and aligned
- **list_eval_partition.csv**: Recommended partitioning of images into training, validation, testing sets. Images 1-162770 are training, 162771-182637 are validation, 182638-202599 are testing
- **list_bbox_celeba.csv**: Bounding box information for each image. "x_1" and "y_1" represent the upper left point coordinate of bounding box. "width" and "height" represent the width and height of bounding box
- **list_landmarks_align_celeba.csv**: Image landmarks and their respective coordinates. There are 5 landmarks: left eye, right eye, nose, left mouth, right mouth
- **list_attr_celeba.csv**: Attribute labels for each image. There are 40 attributes. "1" represents positive while "-1" represents negative
### Acknowledgements
Original data and banner image source came from http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
As mentioned on the website, the CelebA dataset is **available for non-commercial research purposes only**. For specifics please refer to the website.
The creators of this dataset wrote the following paper employing CelebA for face detection:
*S. Yang, P. Luo, C. C. Loy, and X. Tang, "From Facial Parts Responses to Face Detection: A Deep Learning Approach", in IEEE International Conference on Computer Vision (ICCV), 2015*
### Inspiration
- Can you train a model that can detect particular facial attributes?
- Which images contain people that are smiling?
- Does someone have straight or wavy hair?
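A minimal sketch for the smiling question; it assumes `list_attr_celeba.csv` has an `image_id` column plus the 40 attribute columns with values 1 / -1 as described above:
```
# Minimal sketch; the image_id and Smiling column names are assumptions.
import pandas as pd

attrs = pd.read_csv("list_attr_celeba.csv")
smiling = attrs.loc[attrs["Smiling"] == 1, "image_id"]
print(len(smiling), "images labelled as smiling")
```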
|
CelebFaces Attributes (CelebA) Dataset
|
Over 200k images of celebrities with 40 binary attribute annotations
|
Other (specified in description)
| 1
| 2
| null | 0
| 0.069448
| 0.032286
| 0.205475
| 0.094412
| 0.100405
| 0.095678
| 65
|
20
| 29
| 553,690
| null | 667
| 2,150
| 2,150
| 1,119
|
Dataset
|
03/10/2016 20:18:01
|
02/06/2018
| 1,200,754
| 168,604
| 2,407
| 673
| 1
|
11/06/2019
| 29
|
Some say climate change is the biggest threat of our age while others say it's a myth based on dodgy science. We are turning some of the data over to you so you can form your own view.
[Continental US climate change, 1850-2013](https://www.kaggle.com/jagelves/d/berkeleyearth/climate-change-earth-surface-temperature-data/continental-us-climate-change-1850-2013)
Even more than with other data sets that Kaggle has featured, there's a huge amount of data cleaning and preparation that goes into putting together a long-time study of climate trends. Early data was collected by technicians using mercury thermometers, where any variation in the visit time impacted measurements. In the 1940s, the construction of airports caused many weather stations to be moved. In the 1980s, there was a move to electronic thermometers that are said to have a cooling bias.
Given this complexity, there are a range of organizations that collate climate trends data. The three most cited land and ocean temperature data sets are NOAA's MLOST, NASA's GISTEMP and the UK's HadCrut.
We have repackaged the data from a newer compilation put together by the [Berkeley Earth][1], which is affiliated with Lawrence Berkeley National Laboratory. The Berkeley Earth Surface Temperature Study combines 1.6 billion temperature reports from 16 pre-existing archives. It is nicely packaged and allows for slicing into interesting subsets (for example by country). They publish the source data and the code for the transformations they applied. They also use methods that allow weather observations from shorter time series to be included, meaning fewer observations need to be thrown away.
In this dataset, we have include several files:
Global Land and Ocean-and-Land Temperatures (**GlobalTemperatures.csv**):
- Date: starts in 1750 for average land temperature and 1850 for max and min land temperatures and global ocean and land temperatures
- LandAverageTemperature: global average land temperature in celsius
- LandAverageTemperatureUncertainty: the 95% confidence interval around the average
- LandMaxTemperature: global average maximum land temperature in celsius
- LandMaxTemperatureUncertainty: the 95% confidence interval around the maximum land temperature
- LandMinTemperature: global average minimum land temperature in celsius
- LandMinTemperatureUncertainty: the 95% confidence interval around the minimum land temperature
- LandAndOceanAverageTemperature: global average land and ocean temperature in celsius
- LandAndOceanAverageTemperatureUncertainty: the 95% confidence interval around the global average land and ocean temperature
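A minimal sketch over GlobalTemperatures.csv; the name of the date column in the actual header is an assumption (it is taken here as whatever the first column is called):
```
# Minimal sketch; assumes the first column holds the dates.
import pandas as pd

df = pd.read_csv("GlobalTemperatures.csv")
date_col = df.columns[0]
df[date_col] = pd.to_datetime(df[date_col])

# Average land temperature per decade.
df["decade"] = (df[date_col].dt.year // 10) * 10
print(df.groupby("decade")["LandAverageTemperature"].mean().tail())
```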
Other files include:
- Global Average Land Temperature by Country (**GlobalLandTemperaturesByCountry.csv**)
- Global Average Land Temperature by State (**GlobalLandTemperaturesByState.csv**)
- Global Land Temperatures By Major City (**GlobalLandTemperaturesByMajorCity.csv**)
- Global Land Temperatures By City (**GlobalLandTemperaturesByCity.csv**)
The raw data comes from the [Berkeley Earth data page][2].
[1]: http://berkeleyearth.org/about/
[2]: http://berkeleyearth.org/data/
|
Climate Change: Earth Surface Temperature Data
|
Exploring global temperatures since 1750
|
CC BY-NC-SA 4.0
| 2
| 2
| null | 0
| 0.099324
| 0.043955
| 0.169429
| 0.088494
| 0.100301
| 0.095583
| 66
|
24,990
| 31,029
| 1,966,677
| 1,966,677
| null | 40,943
| 43,113
| 39,331
|
Dataset
|
06/11/2018 12:42:08
|
06/11/2018
| 944,528
| 216,485
| 2,377
| 401
| 1
|
11/06/2019
| 31,029
|
### Context
This is a historical dataset on the modern Olympic Games, including all the Games from Athens 1896 to Rio 2016. I scraped this data from www.sports-reference.com in May 2018. The R code I used to [scrape](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20scrape.R) and [wrangle](https://github.com/rgriff23/Olympic_history/blob/master/R/olympics%20wrangle.R) the data is on GitHub. I recommend checking [my kernel](https://www.kaggle.com/heesoo37/olympic-history-data-a-thorough-analysis) before starting your own analysis.
Note that the Winter and Summer Games were held in the same year up until 1992. After that, they staggered them such that Winter Games occur on a four year cycle starting with 1994, then Summer in 1996, then Winter in 1998, and so on. A common mistake people make when analyzing this data is to assume that the Summer and Winter Games have always been staggered.
### Content
The file athlete_events.csv contains 271116 rows and 15 columns. Each row corresponds to an individual athlete competing in an individual Olympic event (athlete-events). The columns are:
1. **ID** - Unique number for each athlete
2. **Name** - Athlete's name
3. **Sex** - M or F
4. **Age** - Integer
5. **Height** - In centimeters
6. **Weight** - In kilograms
7. **Team** - Team name
8. **NOC** - National Olympic Committee 3-letter code
9. **Games** - Year and season
10. **Year** - Integer
11. **Season** - Summer or Winter
12. **City** - Host city
13. **Sport** - Sport
14. **Event** - Event
15. **Medal** - Gold, Silver, Bronze, or NA
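As an example of how the athlete-event rows can be aggregated, here is a minimal pandas sketch. It assumes only the column names listed above; note that counting rows this way counts each member of a medal-winning team separately.

```python
import pandas as pd

athletes = pd.read_csv("athlete_events.csv")

# Rows without a medal have Medal = NA, so drop them before counting.
medals = (
    athletes.dropna(subset=["Medal"])
            .groupby("NOC")["Medal"]
            .count()
            .sort_values(ascending=False)
)
print(medals.head(10))
```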
### Acknowledgements
The Olympic data on www.sports-reference.com is the result of an incredible amount of research by a group of Olympic history enthusiasts and self-proclaimed 'statistorians'. Check out their [blog](http://olympstats.com/) for more information. All I did was consolidate their decades of work into a convenient format for data analysis.
### Inspiration
This dataset provides an opportunity to ask questions about how the Olympics have evolved over time, including questions about the participation and performance of women, different nations, and different sports and events.
|
120 years of Olympic history: athletes and results
|
basic bio data on athletes and medal results from Athens 1896 to Rio 2016
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.07813
| 0.043407
| 0.217545
| 0.052728
| 0.097952
| 0.093447
| 67
|
25,012
| 33,080
| 1,969,040
| 1,976,091
| null | 4,852,390
| 4,919,004
| 41,452
|
Dataset
|
06/24/2018 19:03:59
|
06/24/2018
| 880,542
| 201,414
| 1,150
| 711
| 1
|
07/20/2024
| 33,080
|
This dataset contains information about used cars.
This data can be used for a lot of purposes such as price prediction to exemplify the use of linear regression in Machine Learning.
**The columns in the given dataset are as follows**:
1. name
2. year
3. selling_price
4. km_driven
5. fuel
6. seller_type
7. transmission
8. owner
#### For used motorcycle datasets, please go to https://www.kaggle.com/nehalbirla/motorcycle-dataset
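To exemplify the price-prediction use case mentioned above, here is a minimal linear-regression sketch. The file name `car_data.csv` is a placeholder, and the column names follow the list above.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

cars = pd.read_csv("car_data.csv")   # placeholder name; use the CSV shipped with this dataset

X = cars[["year", "km_driven"]]      # two numeric predictors from the column list
y = cars["selling_price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```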
|
Vehicle dataset
|
Used Cars data from websites
|
Database: Open Database, Contents: Database Contents
| 4
| 4
| null | 0
| 0.072837
| 0.021
| 0.2024
| 0.093491
| 0.097432
| 0.092973
| 68
|
3,121
| 435
| 213,782
| 213,782
| null | 896
| 896
| 2,039
|
Dataset
|
11/24/2016 01:32:33
|
02/06/2018
| 1,209,177
| 242,477
| 1,134
| 124
| 2
|
07/22/2025
| 435
|
Sample sales data covering order info, sales, customers, shipping, etc. It can be used for segmentation, customer analytics, clustering and more, and was inspired by retail analytics. The data was originally used for Pentaho DI Kettle, but I found the set could also be useful for sales simulation training.
Originally Written by Marรญa Carina Roldรกn, Pentaho Community Member, BI consultant (Assert Solutions), Argentina. This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Modified by Gus Segura June 2014.
|
Sample Sales Data
|
Denormalize Sales Data : Segmentation, Clustering, Shipping, etc.
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.100021
| 0.020708
| 0.243664
| 0.016305
| 0.095174
| 0.090914
| 69
|
33,499
| 826,163
| 2,681,031
| 2,681,031
| null | 2,879,186
| 2,926,173
| 841,293
|
Dataset
|
08/11/2020 14:08:36
|
08/11/2020
| 625,470
| 199,175
| 1,572
| 666
| 1
|
02/01/2022
| 826,163
|

### Context
I took the Titanic test file and the gender_submission file and put them together in Excel to make a single CSV. This is great for making charts to help you visualize the data, and it also tells you who died or survived. The labels are at least 70% right, but it's up to you to get them to 100%. Thanks to the Titanic beginner competition for providing the data. Please **upvote** this dataset if you find it useful. Thank you!
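For anyone who prefers to reproduce the merge in code rather than Excel, here is a minimal pandas sketch. It assumes the two competition files (`test.csv` and `gender_submission.csv`) and that they share the `PassengerId` column.

```python
import pandas as pd

test = pd.read_csv("test.csv")
labels = pd.read_csv("gender_submission.csv")

# Join the passenger features with the survival labels on PassengerId.
merged = test.merge(labels, on="PassengerId")
merged.to_csv("titanic_merged.csv", index=False)
print(merged.shape)
```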
|
Titanic dataset
|
Gender submission and test file merged
|
CC0: Public Domain
| 6
| 1
| null | 0
| 0.051738
| 0.028707
| 0.20015
| 0.087574
| 0.092042
| 0.088049
| 70
|
9,633
| 6,012
| 860,951
| 860,951
| null | 1,733,506
| 1,770,518
| 12,369
|
Dataset
|
12/04/2017 11:01:24
|
01/02/2018
| 831,953
| 162,302
| 1,840
| 773
| 1
|
06/26/2020
| 6,012
|
### Context
Ever wondered if you should carry an umbrella tomorrow? With this dataset, you can predict **next-day rain** by training classification models on the target variable **RainTomorrow**.
### Content
This dataset comprises about 10 years of daily weather observations from numerous locations across Australia.
**RainTomorrow is the target variable to predict. It answers the crucial question: will it rain the next day? (Yes or No).**
- This column is marked 'Yes' if the rain for that day was 1mm or more.
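A minimal classification sketch follows, assuming the file is named `weatherAUS.csv` and using only the numeric columns as features; both assumptions should be checked against the download.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

weather = pd.read_csv("weatherAUS.csv").dropna(subset=["RainTomorrow"])

# Numeric features only, with missing values filled by column medians.
num = weather.select_dtypes("number")
X = num.fillna(num.median())
y = (weather["RainTomorrow"] == "Yes").astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```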
### Source & Acknowledgements
The observations were gathered from a multitude of weather stations. You can access daily observations from http://www.bom.gov.au/climate/data.
For example, you can check the latest weather observations in Canberra here: [Canberra Weather](http://www.bom.gov.au/climate/dwo/IDCJDW2801.latest.shtml).
Definitions have been adapted from the Bureau of Meteorology's [Climate Data Online](http://www.bom.gov.au/climate/dwo/IDCJDW0000.shtml).
Data source: [Climate Data](http://www.bom.gov.au/climate/dwo/) and [Climate Data Online](http://www.bom.gov.au/climate/data).
Copyright Commonwealth of Australia 2010, Bureau of Meteorology.
|
Rain in Australia
|
Predict next-day rain in Australia
|
Other (specified in description)
| 2
| 3
| null | 0
| 0.068818
| 0.033601
| 0.163096
| 0.101644
| 0.09179
| 0.087818
| 71
|
6,911
| 23,777
| 650,515
| 650,515
| null | 30,378
| 30,535
| 31,831
|
Dataset
|
04/26/2018 10:56:50
|
04/26/2018
| 519,156
| 173,930
| 1,441
| 904
| 1
|
08/15/2020
| 23,777
|
This dataset is for running the code from this site: https://becominghuman.ai/building-an-image-classifier-using-deep-learning-in-python-totally-from-a-beginners-perspective-be8dbaf22dd8.
This is how to show a picture from the training set in a notebook (after running `from IPython.display import display, Image`): `display(Image('../input/cat-and-dog/training_set/training_set/dogs/dog.423.jpg'))`
From the test set: `display(Image('../input/cat-and-dog/test_set/test_set/cats/cat.4453.jpg'))`
See an example of using this dataset. https://www.kaggle.com/tongpython/nattawut-5920421014-cat-vs-dog-dl
|
Cat and Dog
|
Cats and Dogs dataset to train a DL model
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.042944
| 0.026314
| 0.174781
| 0.118869
| 0.090727
| 0.086845
| 72
|
15,213
| 14,872
| 1,303,085
| 1,303,085
| null | 228,180
| 239,901
| 22,467
|
Dataset
|
03/04/2018 03:23:24
|
03/04/2018
| 711,799
| 121,922
| 1,917
| 1,101
| 1
|
11/06/2019
| 14,872
|
### Context
This dataset is created for prediction of Graduate Admissions from an Indian perspective.
### Content
The dataset contains several parameters which are considered important during the application for Masters Programs.
The parameters included are:
1. GRE Scores ( out of 340 )
2. TOEFL Scores ( out of 120 )
3. University Rating ( out of 5 )
4. Statement of Purpose and Letter of Recommendation Strength ( out of 5 )
5. Undergraduate GPA ( out of 10 )
6. Research Experience ( either 0 or 1 )
7. Chance of Admit ( ranging from 0 to 1 )
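A minimal regression sketch follows; the file name `admission.csv` and the exact header spellings are assumptions, so check `df.columns` first.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("admission.csv")    # placeholder file name
df.columns = df.columns.str.strip()  # guard against stray whitespace in headers

y = df["Chance of Admit"]                                  # target from the list above
X = df[["GRE Score", "TOEFL Score", "University Rating"]]  # a few of the listed predictors

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("Held-out R^2:", reg.score(X_test, y_test))
```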
### Acknowledgements
This dataset is inspired by the UCLA Graduate Dataset. The test scores and GPA are in the older format.
The dataset is owned by Mohan S Acharya.
### Inspiration
This dataset was built with the purpose of helping students in shortlisting universities with their profiles. The predicted output gives them a fair idea about their chances for a particular university.
### Citation
Please cite the following if you are interested in using the dataset :
**Mohan S Acharya, Asfia Armaan, Aneeta S Antony : A Comparison of Regression Models for Prediction of Graduate Admissions, IEEE International Conference on Computational Intelligence in Data Science 2019**
I would like to thank all of you for contributing to this dataset through discussions and questions. I am in awe of the number of kernels built on this dataset. Some results and visualisations are fantastic and make me a proud owner of the dataset. Keep 'em coming! Thank You.
|
Graduate Admission 2
|
Predicting admission from important parameters
|
CC0: Public Domain
| 2
| 2
|
lstm, dnn
| 2
| 0.058879
| 0.035007
| 0.122519
| 0.144773
| 0.090294
| 0.086448
| 73
|
15,952
| 27,352
| 1,352,634
| 1,352,634
| null | 34,877
| 35,935
| 35,558
|
Dataset
|
05/19/2018 00:53:44
|
05/19/2018
| 629,862
| 187,090
| 901
| 775
| 1
|
07/22/2025
| 27,352
|
# The MNIST dataset provided in an easy-to-use CSV format
The [original dataset](http://yann.lecun.com/exdb/mnist/) is in a format that is difficult for beginners to use. This dataset uses the work of [Joseph Redmon](https://pjreddie.com/) to provide the [MNIST dataset in a CSV format](https://pjreddie.com/projects/mnist-in-csv/).
The dataset consists of two files:
1. `mnist_train.csv`
2. `mnist_test.csv`
The `mnist_train.csv` file contains the 60,000 training examples and labels. The `mnist_test.csv` contains 10,000 test examples and labels. Each row consists of 785 values: the first value is the label (a number from 0 to 9) and the remaining 784 values are the pixel values (a number from 0 to 255).
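A minimal sketch of turning a CSV row back into a 28x28 image; it assumes the files include a header row (pass `header=None` to `read_csv` if they do not).

```python
import pandas as pd

train = pd.read_csv("mnist_train.csv")

# First column is the label, the remaining 784 columns are pixel values 0-255.
labels = train.iloc[:, 0].to_numpy()
images = train.iloc[:, 1:].to_numpy().reshape(-1, 28, 28) / 255.0  # scale to [0, 1]

print(labels[0], images.shape)
```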
|
MNIST in CSV
|
The MNIST dataset provided in an easy-to-use CSV format
|
CC0: Public Domain
| 1
| 2
| null | 0
| 0.052101
| 0.016453
| 0.188006
| 0.101907
| 0.089617
| 0.085826
| 74
|
6,084
| 251
| 549,269
| null | 7
| 561
| 561
| 1,725
|
Dataset
|
10/19/2016 10:05:38
|
02/06/2018
| 1,098,092
| 166,008
| 2,055
| 456
| 1
|
11/06/2019
| 251
|
<h2>Context:</h2>
The data were obtained in a survey of students of math and Portuguese language courses in secondary school. It contains a lot of interesting social, gender and study information about the students. You can use it for some EDA or try to predict the students' final grade.
<h2>Content:</h2>
Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:
1. school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira)
2. sex - student's sex (binary: 'F' - female or 'M' - male)
3. age - student's age (numeric: from 15 to 22)
4. address - student's home address type (binary: 'U' - urban or 'R' - rural)
5. famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3)
6. Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart)
7. Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 โ 5th to 9th grade, 3 โ secondary education or 4 โ higher education)
8. Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 โ 5th to 9th grade, 3 โ secondary education or 4 โ higher education)
9. Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
10. Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
11. reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other')
12. guardian - student's guardian (nominal: 'mother', 'father' or 'other')
13. traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
14. studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
15. failures - number of past class failures (numeric: n if 1<=n<3, else 4)
16. schoolsup - extra educational support (binary: yes or no)
17. famsup - family educational support (binary: yes or no)
18. paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
19. activities - extra-curricular activities (binary: yes or no)
20. nursery - attended nursery school (binary: yes or no)
21. higher - wants to take higher education (binary: yes or no)
22. internet - Internet access at home (binary: yes or no)
23. romantic - with a romantic relationship (binary: yes or no)
24. famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
25. freetime - free time after school (numeric: from 1 - very low to 5 - very high)
26. goout - going out with friends (numeric: from 1 - very low to 5 - very high)
27. Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
28. Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
29. health - current health status (numeric: from 1 - very bad to 5 - very good)
30. absences - number of school absences (numeric: from 0 to 93)
These grades are related with the course subject, Math or Portuguese:
31. G1 - first period grade (numeric: from 0 to 20)
32. G2 - second period grade (numeric: from 0 to 20)
33. G3 - final grade (numeric: from 0 to 20, output target)
**Additional note:** there are several (382) students that belong to both datasets.
These students can be identified by searching for identical attributes
that characterize each student, as shown in the annexed R file.
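A minimal sketch predicting the final grade G3 for the math course; if your copy of the files is semicolon-separated (as in the original UCI release), pass `sep=';'` to `read_csv`.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

math_df = pd.read_csv("student-mat.csv")

# One-hot encode the categorical attributes and predict the final grade G3.
X = pd.get_dummies(math_df.drop(columns=["G3"]))
y = math_df["G3"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```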
<h2>Source Information</h2>
P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance. In A. Brito and J. Teixeira Eds., Proceedings of 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008) pp. 5-12, Porto, Portugal, April, 2008, EUROSIS, ISBN 978-9077381-39-7.
Fabio Pagnotta, Hossain Mohammad Amran.
Email:[email protected], mohammadamra.hossain '@' studenti.unicam.it
University Of Camerino
https://archive.ics.uci.edu/ml/datasets/STUDENT+ALCOHOL+CONSUMPTION
|
Student Alcohol Consumption
|
Social, gender and study data from secondary school students
|
CC0: Public Domain
| 2
| 2
| null | 0
| 0.090832
| 0.037527
| 0.166821
| 0.059961
| 0.088785
| 0.085062
| 75
|
15,812
| 4,538
| 1,344,447
| 1,344,447
| null | 7,213
| 7,213
| 10,288
|
Dataset
|
11/13/2017 16:09:15
|
02/06/2018
| 1,193,747
| 135,645
| 4,548
| 278
| 1
|
11/06/2019
| 4,538
|
### Context
High-quality financial data is expensive to acquire and is therefore rarely shared for free. Here I provide the full historical daily price and volume data for all US-based stocks and ETFs trading on the NYSE, NASDAQ, and NYSE MKT. It's one of the best datasets of its kind you can obtain.
### Content
The data (last updated 11/10/2017) is presented in CSV format as follows: Date, Open, High, Low, Close, Volume, OpenInt. Note that prices have been adjusted for dividends and splits.
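A minimal sketch of loading one ticker and computing daily returns; the per-ticker file name is an assumption, and the columns follow the layout described above.

```python
import pandas as pd

# Example ticker file; the exact per-ticker file name is an assumption.
prices = pd.read_csv("aapl.us.txt", parse_dates=["Date"]).set_index("Date")

# Daily simple returns from the adjusted close prices.
prices["Return"] = prices["Close"].pct_change()
print(prices[["Close", "Return"]].tail())
```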
### Acknowledgements
This dataset belongs to me. Iโm sharing it here for free. You may do with it as you wish.
### Inspiration
Many have tried, but most have failed, to predict the stock market's ups and downs. Can you do any better?
|
Huge Stock Market Dataset
|
Historical daily prices and volumes of all U.S. stocks and ETFs
|
CC0: Public Domain
| 3
| 3
| null | 0
| 0.098745
| 0.083052
| 0.136309
| 0.036555
| 0.088665
| 0.084952
| 76
|
3,974
| 199,387
| 348,067
| 348,067
| null | 5,793,796
| 5,870,478
| 210,356
|
Dataset
|
05/20/2019 23:26:06
|
05/20/2019
| 1,039,003
| 161,039
| 2,523
| 450
| 1
|
03/10/2020
| 199,387
|
### Description
This is a countrywide car accident dataset that covers __49 states of the USA__. The accident data were collected from __February 2016 to March 2023__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by various entities, including the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road networks. The dataset currently contains approximately __7.7 million__ accident records. For more information about this dataset, please visit [here](https://smoosavi.org/datasets/us_accidents).
### Acknowledgements
If you use this dataset, please kindly cite the following papers:
- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. โ[A Countrywide Traffic Accident Dataset](https://arxiv.org/abs/1906.05409).โ, 2019.
- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. ["Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights."](https://arxiv.org/abs/1909.09638) In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019.
### Content
This dataset was collected in real-time using multiple Traffic APIs. It contains accident data collected from February 2016 to March 2023 for the Contiguous United States. For more details about this dataset, please visit [here](https://smoosavi.org/datasets/us_accidents).
### Inspiration
The US-Accidents dataset can be used for numerous applications, such as real-time car accident prediction, studying car accident hotspot locations, casualty analysis, extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful for studying the impact of COVID-19 on traffic behavior and accidents.
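As a small example of the hotspot-style analysis mentioned above, here is a minimal sketch; the file name and the `City`/`State`/`Severity` column names are assumptions to verify against the download.

```python
import pandas as pd

accidents = pd.read_csv(
    "US_Accidents_March23.csv",           # placeholder; use the CSV name in the download
    usecols=["Severity", "City", "State"],
)

# Cities with the most recorded accidents.
print(accidents["City"].value_counts().head(10))
```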
### Sampled Data (New!)
For those requiring a smaller, more manageable dataset, [a sampled version is available which includes 500,000 accidents](https://drive.google.com/file/d/1U3u8QYzLjnEaSurtZfSAS_oh9AT2Mn8X). This sample is extracted from the original dataset for easier handling and analysis.
### Other Details
Please note that the dataset may be missing data for certain days, which could be due to network connectivity issues during data collection. Regrettably, the dataset will no longer be updated, and this version should be considered the latest.
### Usage Policy and Legal Disclaimer
This dataset is being distributed solely for research purposes under the Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By downloading the dataset, you agree to use it only for non-commercial, research, or academic applications. If you use this dataset, it is necessary to cite the papers mentioned above.
### Inquiries or need help?
For any inquiries or assistance, please contact Sobhan Moosavi at [email protected]
|
US Accidents (2016 - 2023)
|
A Countrywide Traffic Accident Dataset (2016 - 2023)
|
CC BY-NC-SA 4.0
| 13
| 1
| null | 0
| 0.085944
| 0.046073
| 0.161827
| 0.059172
| 0.088254
| 0.084575
| 77
|
127,440
| 1,818,188
| 8,833,583
| 8,833,583
| null | 2,965,537
| 3,013,202
| 1,840,895
|
Dataset
|
12/24/2021 14:53:06
|
12/24/2021
| 687,031
| 213,334
| 660
| 519
| 2
|
07/22/2025
| 1,818,188
|

### Description:
The sinking of the Titanic is one of the most infamous shipwrecks in history.
On April 15, 1912, during her maiden voyage, the widely considered "unsinkable" RMS Titanic sank after colliding with an iceberg. Unfortunately, there weren't enough lifeboats for everyone on board, resulting in the death of 1502 out of 2224 passengers and crew.
While there was some element of luck involved in surviving, it seems some groups of people were more likely to survive than others.
In this challenge, we ask you to build a predictive model that answers the question "what sorts of people were more likely to survive?" using passenger data (i.e. name, age, gender, socio-economic class, etc.).
### Acknowledgements:
This dataset has been sourced from Kaggle: https://www.kaggle.com/c/titanic/data.
### Objective:
- Understand the Dataset & cleanup (if required).
- Build a strong classification model to predict whether the passenger survives or not.
- Also fine-tune the hyperparameters & compare the evaluation metrics of various classification algorithms.
|
Titanic Dataset
|
Titanic Survival Prediction Dataset
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.05683
| 0.012052
| 0.214378
| 0.068245
| 0.087876
| 0.084227
| 78
|
12,691
| 6,057
| 1,132,983
| null | 8
| 285,982
| 298,461
| 12,426
|
Dataset
|
12/04/2017 18:00:36
|
12/04/2017
| 318,711
| 0
| 619
| 2,372
| 1
|
07/22/2025
| 6,057
|
### Context
This dataset contains all stories and comments from Hacker News from its launch in 2006. Each story contains a story id, the author that made the post, when it was written, and the number of points the story received. Hacker News is a social news website focusing on computer science and entrepreneurship. It is run by Paul Graham's investment fund and startup incubator, Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity".
### Content
Each story contains a story ID, the author that made the post, when it was written, and the number of points the story received.
Please note that the text field includes profanity. All texts are the author's own, do not necessarily reflect the positions of Kaggle or Hacker News, and are presented without endorsement.
## Querying BigQuery tables
You can use the BigQuery Python client library to query tables in this dataset in Kernels. Note that methods available in Kernels are limited to querying data. Tables are at `bigquery-public-data.hacker_news.[TABLENAME]`. **Fork [this kernel][1] to get started**.
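A minimal BigQuery sketch, meant to be run inside a Kernel where the client library and credentials are available; the `full` table name and its `by`/`type` columns are assumptions to check against the dataset listing.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Count stories per author; `full` is one commonly available table in this dataset.
query = """
    SELECT `by` AS author, COUNT(*) AS stories
    FROM `bigquery-public-data.hacker_news.full`
    WHERE type = 'story'
    GROUP BY author
    ORDER BY stories DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.author, row.stories)
```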
### Acknowledgements
This dataset was kindly made publicly available by [Hacker News][2] under [the MIT license][3].
### Inspiration
- Recent studies have found that many forums tend to be dominated by a very small fraction of users. Is this true of Hacker News?
- Hacker News has received complaints that the site is biased towards Y Combinator startups. Do the data support this?
- Is the amount of coverage by Hacker News predictive of a startup's success?
[1]: https://www.kaggle.com/mrisdal/mentions-of-kaggle-on-hacker-news
[2]: https://github.com/HackerNews/API
[3]: https://github.com/HackerNews/API/blob/master/LICENSE
|
Hacker News
|
All posts from Y Combinator's social news website from 2006 to late 2017
|
CC0: Public Domain
| 2
| 2
|
crime, military, public safety
| 3
| 0.026363
| 0.011304
| 0
| 0.3119
| 0.087392
| 0.083782
| 79
|
28,912
| 85,351
| 2,227,692
| 2,227,692
| null | 196,980
| 208,198
| 94,850
|
Dataset
|
12/01/2018 19:18:25
|
12/01/2018
| 879,603
| 148,403
| 3,075
| 533
| 1
|
11/06/2019
| 85,351
|
### Content
This compiled dataset was pulled from four other datasets linked by time and place, and was built to find signals correlated to increased suicide rates among different cohorts globally, across the socio-economic spectrum.
### References
United Nations Development Program. (2018). Human development index (HDI). Retrieved from http://hdr.undp.org/en/indicators/137506
World Bank. (2018). World development indicators: GDP (current US$) by country:1985 to 2016. Retrieved from http://databank.worldbank.org/data/source/world-development-indicators#
[Szamil]. (2017). Suicide in the Twenty-First Century [dataset]. Retrieved from https://www.kaggle.com/szamil/suicide-in-the-twenty-first-century/notebook
World Health Organization. (2018). Suicide prevention. Retrieved from http://www.who.int/mental_health/suicide-prevention/en/
### Inspiration
Suicide Prevention.
|
Suicide Rates Overview 1985 to 2016
|
Compares socio-economic info with suicide rates by year and country
|
World Bank Dataset Terms of Use
| 1
| 1
| null | 0
| 0.072759
| 0.056153
| 0.149129
| 0.070085
| 0.087032
| 0.083451
| 80
|
8,220
| 8,782
| 760,904
| 760,904
| null | 2,431,805
| 2,474,046
| 15,961
|
Dataset
|
01/06/2018 14:07:17
|
02/05/2018
| 668,859
| 142,960
| 2,000
| 824
| 1
|
11/06/2019
| 8,782
|
### Context
This dataset contains 4242 images of flowers.
The data collection is based on scraped data from Flickr, Google Images, and Yandex Images.
You can use this dataset to recognize plants from a photo.
### Content
The pictures are divided into five classes: chamomile, tulip, rose, sunflower, dandelion.
For each class there are about 800 photos. The photos are not high resolution (about 320x240 pixels) and are not reduced to a single size; they have different proportions!
# Acknowledgements
The data collection is based on scraped data from flickr, google images, and yandex images.
# Inspiration
What kind of flower is that?
|
Flowers Recognition
|
This dataset contains 4242 labeled images of flowers.
|
Unknown
| 2
| 3
| null | 0
| 0.055327
| 0.036522
| 0.14366
| 0.10835
| 0.085965
| 0.082469
| 81
|
9,071
| 504
| 824,607
| null | 284
| 95,812
| 98,337
| 2,143
|
Dataset
|
12/07/2016 16:44:03
|
02/06/2018
| 829,699
| 115,392
| 2,603
| 843
| 1
|
11/06/2019
| 504
|
# Context
Information on more than 180,000 Terrorist Attacks
The Global Terrorism Database (GTD) is an open-source database including information on terrorist attacks around the world from 1970 through 2017. The GTD includes systematic data on domestic as well as international terrorist incidents that have occurred during this time period and now includes more than 180,000 attacks. The database is maintained by researchers at the National Consortium for the Study of Terrorism and Responses to Terrorism (START), headquartered at the University of Maryland.
[More Information][1]
# Content
Geography: Worldwide
Time period: 1970-2017, *except 1993*
Unit of analysis: Attack
Variables: >100 variables on location, tactics, perpetrators, targets, and outcomes
Sources: Unclassified media articles (Note: Please interpret changes over time with caution. Global patterns are driven by diverse trends in particular regions, and data collection is influenced by fluctuations in access to media coverage over both time and place.)
Definition of terrorism:
"The threatened or actual use of illegal force and violence by a non-state actor to attain a political, economic, religious, or social goal through fear, coercion, or intimidation."
See the [GTD Codebook][2] for important details on data collection methodology, definitions, and coding schema.
# Acknowledgements
The Global Terrorism Database is funded through START, by the US Department of State (Contract Number: SAQMMA12M1292) and the US Department of Homeland Security Science and Technology Directorateโs Office of University Programs (Award Number 2012-ST-061-CS0001, CSTAB 3.1). The coding decisions and classifications contained in the database are determined independently by START researchers and should not be interpreted as necessarily representing the official views or policies of the United States Government.
[GTD Team][3]
# Publications
The GTD has been leveraged extensively in [scholarly publications][4], [reports][5], and [media articles][6]. *[Putting Terrorism in Context: Lessons from the Global Terrorism Database][7]*, by GTD principal investigators LaFree, Dugan, and Miller investigates patterns of terrorism and provides perspective on the challenges of data collection and analysis. The GTD's data collection manager, Michael Jensen, discusses important [Benefits and Drawbacks of Methodological Advancements in Data Collection and Coding][8].
# Terms of Use
Use of the data signifies your agreement to the following [terms and conditions][9].
END USER LICENSE AGREEMENT WITH UNIVERSITY OF MARYLAND
IMPORTANT โ THIS IS A LEGAL AGREEMENT BETWEEN YOU ("You") AND THE UNIVERSITY OF MARYLAND, a public agency and instrumentality of the State of Maryland, by and through the National Consortium for the Study of Terrorism and Responses to Terrorism (โSTART,โ โUS,โ โWEโ or โUniversityโ). PLEASE READ THIS END USER LICENSE AGREEMENT (โEULAโ) BEFORE ACCESSING THE Global Terrorism Database (โGTDโ). THE TERMS OF THIS EULA GOVERN YOUR ACCESS TO AND USE OF THE GTD WEBSITE, THE DATA, THE CODEBOOK, AND ANY AUXILIARY MATERIALS. BY ACCESSING THE GTD, YOU SIGNIFY THAT YOU HAVE READ, UNDERSTAND, ACCEPT, AND AGREE TO ABIDE BY THESE TERMS AND CONDITIONS. IF YOU DO NOT ACCEPT THE TERMS OF THIS EULA, DO NOT ACCESS THE GTD.
TERMS AND CONDITIONS
1. GTD means Global Terrorism Database data and the online user interface (www.start.umd.edu/gtd) produced and maintained by the National Consortium for the Study of Terrorism and Responses to Terrorism (START). This includes the data and codebook, any auxiliary materials present, and the user interface by which the data are presented.
2. LICENSE GRANT. University hereby grants You a revocable, non-exclusive, non-transferable right and license to access the GTD and use the data, the codebook, and any auxiliary materials solely for non-commercial research and analysis.
3. RESTRICTIONS. You agree to NOT:
a. publicly post or display the data, the codebook, or any auxiliary materials without express written permission by University of Maryland (this excludes publication of analysis or visualization of the data for non-commercial purposes);
b. sell, license, sublicense, or otherwise distribute the data, the codebook, or any auxiliary materials to third parties for cash or other considerations;
c. modify, hide, delete or interfere with any notices that are included on the GTD or the codebook, or any auxiliary materials;
d. use the GTD to draw conclusions about the official legal status or criminal record of an individual, or the status of a criminal or civil investigation;
e. interfere with or disrupt the GTD website or servers and networks connected to the GTD website; or
f. use robots, spiders, crawlers, automated devices and similar technologies to screen-scrape the site or to engage in data aggregation or indexing of the data, the codebook, or any auxiliary materials other than in accordance with the siteโs robots.txt file.
4. YOUR RESPONSIBILITIES:
a. All information sourced from the GTD should be acknowledged and cited as follows: "National Consortium for the Study of Terrorism and Responses to Terrorism (START), University of Maryland. (2018). The Global Terrorism Database (GTD) [Data file]. Retrieved from https://www.start.umd.edu/gtd"
b. You agree to acknowledge any copyrightable materials with a copyright notice โCopyright University of Maryland 2018.โ
c. Any modifications You make to the GTD for published analysis must be clearly documented and must not misrepresent analytical decisions made by START.
d. You agree to seek out an additional agreement in order to use the GTD, the data, the codebook or auxiliary materials for commercial purposes, or to create commercial product or services based on the GTD, the data, the codebook or auxiliary materials.
5. INTELLECTUAL PROPERTY. The University owns all rights, title, and interest in the GTD, the data and codebook, and all auxiliary materials. This EULA does not grant You any rights, title, or interests in the GTD or the data, the codebook, user interface, or any auxiliary materials other than those expressly granted to you under this EULA.
6. DISCLAIMER AND LIMITATION ON LIABILITY.
a. THE GTD, THE CODEBOOK, USER INTERFACE, OR ANY AUXILIARY MATERIALS ARE MADE AVAILABLE ON AN "AS IS" BASIS. UNIVERSITY DISCLAIMS ANY AND ALL REPRESENTATIONS AND WARRANTIES โ WHETHER EXPRESS OR IMPLIED, ORAL OR WRITTEN, IN FACT OR ARISING BY OPERATION OF LAW โ WITH RESPECT TO THE GTD, THE CODEBOOK, AND ANY AUXILIARY MATERIALS INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT OF THE INTELLECTUAL PROPERTY OR PROPRIETARY RIGHTS OF ANY THIRD PARTY. UNIVERSITY MAKES NO REPRESENTATION OR WARRANTY THAT THE GTD, THE CODEBOOK, ANY AUXILIARY MATERIALS, OR USER INTERFACE WILL OPERATE ERROR FREE OR IN AN UNINTERRUPTED FASHION.
b. In no event will the University be liable to You for any incidental, special, punitive, exemplary or consequential damages of any kind, including lost profits or business interruption, even if advised of the possibility of such claims or demands, whether in contract, tort, or otherwise, arising in connection with Your access to and use of the GTD, the codebook, user interface, or any auxiliary materials or other dealings. This limitation upon damages and claims is intended to apply without regard to whether other provisions of this EULA have been breached or proven ineffective. In no event will Universityโs total liability for the breach or nonperformance of this EULA exceed the fees paid to University within the current billing cycle.
c. Every reasonable effort has been made to check sources and verify facts in the GTD; however, START cannot guarantee that accounts reported in the open literature are complete and accurate. START shall not be held liable for any loss or damage caused by errors or omissions or resulting from any use, misuse, or alteration of GTD data by the USER. The USER should not infer any additional actions or results beyond what is presented in a GTD entry and specifically, the USER should not infer an individual referenced in the GTD was charged, tried, or convicted of terrorism or any other criminal offense. If new documentation about an event becomes available, START may modify the data as necessary and appropriate.
d. University is under no obligation to update the GTD, the codebook, user interface, or any auxiliary materials.
7. INDEMNITY. You hereby agree to defend, indemnify, and hold harmless the University and its employees, agents, directors, and officers from and against any and all claims, proceedings, damages, injuries, liabilities, losses, costs, and expenses (including reasonable attorneysโ fees and litigation expenses) relating to or arising out of Your use of the GTD, the codebook, or any auxiliary materials or Your breach of any term in this EULA.
8. TERM AND TERMINATION
a. This EULA and your right to access the GTD website and use the data, the codebook, and any auxiliary materials will take effect when you access the GTD.
b. University reserves the right, at any time and without prior notice, to modify, discontinue or suspend, temporarily or permanently, Your access to the GTD website (or any part thereof) without liability to You.
9. MISCELLANEOUS
a. The University may modify this EULA at any time. Check the GTD website for modifications.
b. No term of this Agreement can be waived except by the written consent of the party waiving compliance.
c. If any provision of this EULA is determined by a court of competent jurisdiction to be void, invalid, or otherwise unenforceable, such determination shall not affect the remaining provisions of this Agreement.
d. This Agreement does not create a joint venture, partnership, employment, or agency relationship between the Parties.
e. There are no third party beneficiaries to this agreement. You may not assign this agreement without the Universityโs prior written approval.
f. This EULA shall be governed by and interpreted in accordance with United States copyright law and the laws of the State of Maryland without reference to its conflicts of laws rules. Nothing in this EULA is or shall be deemed to be a waiver by the University of any of its rights or status as an agency and instrumentality of the State of Maryland. The parties mutually agree to opt out of application of the Maryland Uniform Computer Information Transactions Act (MUCITA), Maryland Code Annotated [Commercial Law] 21-101 through 21-816 to the greatest extent authorized under MUCITA.
g. This EULA represents the entire agreement between You and the University regarding the subject matter of paragraphs 1-10. There are no other understandings, written or oral, that are not included in this Agreement.
10. REPRESENTATION. You represent that You are at least 18 years of age.
# Training
START has released the first in a series of [training modules][10] designed to equip GTD users with the knowledge and tools to best leverage the database. This training module provides a general overview of the GTD, including the data collection process, uses of the GTD, and patterns of global terrorism. Participants will learn basic data handling and how to generate summary statistics from the GTD using PivotTables in Microsoft Excel.
# Questions?
Find answers to [Frequently Asked Questions][11].
Contact the GTD staff at [email protected].
[1]: http://start.umd.edu/gtd
[2]: http://start.umd.edu/gtd/downloads/Codebook.pdf
[3]: http://start.umd.edu/gtd/about/GTDTeam.aspx
[4]: https://scholar.google.com/scholar?hl=en&q=%22Global%20Terrorism%20Database%22&btnG=&as_sdt=1%2C21&as_sdtp=
[5]: http://www.start.umd.edu/publications?combine=&author%5B%5D=13781&year%5Bvalue%5D%5Byear%5D=
[6]: http://www.start.umd.edu/start-in-the-news?combine=Global%20Terrorism%20Database
[7]: http://www.start.umd.edu/publication/putting-terrorism-context-lessons-global-terrorism-database-contemporary-terrorism
[8]: http://www.start.umd.edu/news/discussion-point-benefits-and-drawbacks-methodological-advancements-data-collection-and-coding
[9]: http://start.umd.edu/gtd/terms-of-use/
[10]: http://www.start.umd.edu/training/using-global-terrorism-database-introduction-module-1
[11]: http://www.start.umd.edu/gtd/faq/
|
Global Terrorism Database
|
More than 180,000 terrorist attacks worldwide, 1970-2017
|
Other (specified in description)
| 3
| 3
|
python, r, sql
| 3
| 0.068631
| 0.047534
| 0.115957
| 0.110848
| 0.085742
| 0.082264
| 82
|
12,394
| 165,566
| 1,116,271
| 1,116,271
| null | 377,107
| 391,742
| 176,189
|
Dataset
|
04/14/2019 15:15:54
|
04/14/2019
| 852,583
| 144,211
| 1,515
| 753
| 1
|
04/17/2021
| 165,566
| null |
Brain MRI Images for Brain Tumor Detection
| null |
Data files ยฉ Original Authors
| 1
| 1
| null | 0
| 0.070524
| 0.027666
| 0.144917
| 0.099014
| 0.08553
| 0.082068
| 83
|
106,196
| 1,608,934
| 6,450,724
| 6,450,724
| null | 2,645,886
| 2,689,882
| 1,629,372
|
Dataset
|
09/24/2021 12:43:45
|
09/24/2021
| 620,861
| 144,439
| 998
| 948
| 1
|
11/05/2023
| 1,608,934
|
# What is a brain tumor?
A brain tumor is a collection, or mass, of abnormal cells in your brain. Your skull, which encloses your brain, is very rigid. Any growth inside such a restricted space can cause problems. Brain tumors can be cancerous (malignant) or noncancerous (benign). When benign or malignant tumors grow, they can cause the pressure inside your skull to increase. This can cause brain damage, and it can be life-threatening.
# The importance of the subject
Early detection and classification of brain tumors is an important research domain in the field of medical imaging; it helps in selecting the most suitable treatment method to save patients' lives.
# Methods
The application of deep learning approaches to improve health diagnosis is providing impactful solutions. According to the World Health Organization (WHO), proper brain tumor diagnosis involves detection, identification of the brain tumor's location, and classification of the tumor on the basis of malignancy, grade, and type. This experimental work on the diagnosis of brain tumors using Magnetic Resonance Imaging (MRI) involves detecting the tumor, classifying it in terms of grade and type, and identifying its location. The work experiments with using one model for classifying brain MRI across different classification tasks rather than an individual model for each task. The Convolutional Neural Network (CNN) based multi-task classification is equipped for the classification and detection of tumors, and the identification of the brain tumor location is also done with a CNN-based model by segmenting the tumor.
# About Dataset
This dataset is a combination of the following three datasets :
[figshare](https://figshare.com/articles/dataset/brain_tumor_dataset/1512427)
[SARTAJ dataset](https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri)
[Br35H](https://www.kaggle.com/datasets/ahmedhamada0/brain-tumor-detection?select=no)
This dataset contains **7023** images of human brain MRI images which are classified into 4 classes: **glioma** - **meningioma** - **no tumor** and **pituitary**.
no tumor class images were taken from the Br35H dataset.
I think the SARTAJ dataset has a problem: the glioma class images are not categorized correctly. I realized this from the results of other people's work as well as the different models I trained, so I deleted the images in that folder and used the images from the figshare site instead.
## Note
Note that the images in this dataset have different sizes. You can resize them to the desired size after pre-processing and removing the extra margins; this will improve the accuracy of the model. **[pre-processing code](https://github.com/masoudnick/Brain-Tumor-MRI-Classification/blob/main/Preprocessing.py)**
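A minimal resizing sketch, assuming a folder layout with one sub-directory per class and JPEG files; the folder name and the target size 224x224 are only examples.

```python
from pathlib import Path

import numpy as np
from PIL import Image

root = Path("Training")              # placeholder path with one folder per class
images, labels = [], []
for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    for img_path in class_dir.glob("*.jpg"):
        # Convert to grayscale and force a common size so the arrays stack cleanly.
        img = Image.open(img_path).convert("L").resize((224, 224))
        images.append(np.asarray(img, dtype=np.float32) / 255.0)
        labels.append(class_dir.name)

X = np.stack(images)
print(X.shape, sorted(set(labels)))
```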
If you find this dataset helpful, feel free to upvote it!
And do give me your suggestions or opinions. Thank you :wink:
|
Brain Tumor MRI Dataset
|
A dataset for classifying brain tumors
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.051356
| 0.018225
| 0.145146
| 0.124655
| 0.084845
| 0.081438
| 84
|
25,575
| 78,313
| 2,009,285
| 2,009,285
| null | 182,633
| 193,494
| 87,652
|
Dataset
|
11/16/2018 12:17:57
|
11/16/2018
| 840,451
| 156,793
| 1,506
| 642
| 1
|
09/09/2021
| 78,313
|
**This dataset is recreated using offline augmentation from the original dataset. The original dataset can be found on [this][1] GitHub repo. This dataset consists of about 87K RGB images of healthy and diseased crop leaves, which are categorized into 38 different classes. The total dataset is divided into an 80/20 ratio of training and validation sets, preserving the directory structure.
A new directory containing 33 test images was created later for prediction purposes.**
[1]: https://github.com/spMohanty/PlantVillage-Dataset
|
New Plant Diseases Dataset
|
Image dataset containing different healthy and unhealthy crop leaves.
|
Data files ยฉ Original Authors
| 2
| 2
|
finance, real estate, marketing, ratings and reviews, e-commerce services, travel
| 6
| 0.069521
| 0.027501
| 0.15756
| 0.084418
| 0.08475
| 0.08135
| 85
|
15,346
| 12,603
| 1,312,319
| 1,312,319
| null | 17,232
| 17,232
| 20,066
|
Dataset
|
02/10/2018 16:55:10
|
02/10/2018
| 1,047,030
| 175,459
| 1,319
| 365
| 1
|
07/22/2025
| 12,603
|
### Context
Although there have been a lot of studies in the past on factors affecting life expectancy, considering demographic variables, income composition and mortality rates, it was found that the effects of immunization and the Human Development Index were not taken into account. Also, some of the past research was done using multiple linear regression based on a data set of one year for all countries. Hence, this gives motivation to resolve both of the factors stated previously by formulating a regression model based on a mixed-effects model and multiple linear regression, while considering data from the period 2000 to 2015 for all countries. Important immunizations like Hepatitis B, Polio and Diphtheria will also be considered. In a nutshell, this study will focus on immunization factors, mortality factors, economic factors, social factors and other health-related factors as well. Since the observations in this dataset are based on different countries, it will be easier for a country to determine the predicting factors contributing to a lower value of life expectancy. This will help in suggesting to a country which areas should be given importance in order to efficiently improve the life expectancy of its population.
### Content
The project relies on the accuracy of the data. The Global Health Observatory (GHO) data repository under the World Health Organization (WHO) keeps track of the health status as well as many other related factors for all countries. The data sets are made available to the public for the purpose of health data analysis. The data set related to life expectancy and health factors for 193 countries has been collected from the WHO data repository website, and its corresponding economic data was collected from the United Nations website. Among all categories of health-related factors, only those critical factors were chosen which are more representative. It has been observed that in the past 15 years there has been a huge development in the health sector, resulting in improvement of human mortality rates, especially in the developing nations, in comparison to the past 30 years. Therefore, in this project we have considered data from the years 2000-2015 for 193 countries for further analysis. The individual data files have been merged into a single data set. Initial visual inspection of the data showed some missing values. As the data sets were from WHO, we found no evident errors. Missing data was handled in R software by using the Missmap command. The result indicated that most of the missing data was for Population, Hepatitis B and GDP. The missing data were from lesser-known countries like Vanuatu, Tonga, Togo, Cabo Verde etc. Finding all the data for these countries was difficult and hence it was decided to exclude these countries from the final model data set. The final merged file (final dataset) consists of 22 columns and 2938 rows, which means 20 predicting variables. All predicting variables were then divided into several broad categories: immunization-related factors, mortality factors, economic factors and social factors.
### Acknowledgements
The data was collected from WHO and United Nations website with the help of Deeksha Russell and Duan Wang.
### Inspiration
The data-set aims to answer the following key questions:
1. Do the various predicting factors which were chosen initially really affect life expectancy? Which predicting variables actually affect life expectancy?
2. Should a country having a lower life expectancy value (<65) increase its healthcare expenditure in order to improve its average lifespan?
3. How do infant and adult mortality rates affect life expectancy?
4. Does life expectancy have a positive or negative correlation with eating habits, lifestyle, exercise, smoking, drinking alcohol etc.?
5. What is the impact of schooling on the lifespan of humans?
6. Does life expectancy have a positive or negative relationship with drinking alcohol?
7. Do densely populated countries tend to have lower life expectancy?
8. What is the impact of immunization coverage on life expectancy?
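A minimal pooled OLS sketch for question 1; this is deliberately simpler than the mixed-effects model described above, and the file name and exact column spellings are assumptions, so the headers are normalised first.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("life_expectancy.csv")  # placeholder file name
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

cols = ["life_expectancy", "schooling", "gdp", "adult_mortality"]  # assumed column names
df = df.dropna(subset=cols)

# Pooled ordinary least squares with a few of the factors discussed above.
model = smf.ols("life_expectancy ~ schooling + gdp + adult_mortality", data=df).fit()
print(model.params)
```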
|
Life Expectancy (WHO)
|
Statistical Analysis on factors influencing Life Expectancy
|
Other (specified in description)
| 1
| 1
| null | 0
| 0.086608
| 0.024086
| 0.176318
| 0.047995
| 0.083752
| 0.080429
| 86
|
172,689
| 2,527,538
| 7,089,251
| 7,089,251
| null | 4,289,678
| 4,347,518
| 2,556,231
|
Dataset
|
10/06/2022 08:55:25
|
10/06/2022
| 606,937
| 155,037
| 1,570
| 749
| 1
|
07/22/2025
| 2,527,538
|
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
In the (.csv) file we can find several variables: some of them are independent (several medical predictor variables) and only one is the target dependent variable (Outcome).
|
Diabetes Dataset
|
Diabetes Patients Data
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.050205
| 0.02867
| 0.155796
| 0.098488
| 0.08329
| 0.080002
| 87
|
3,395
| 3,258
| 268,812
| 268,812
| null | 5,337
| 5,337
| 8,131
|
Dataset
|
10/20/2017 15:09:18
|
02/05/2018
| 779,259
| 138,169
| 1,649
| 668
| 1
|
12/01/2019
| 3,258
|
The original [MNIST image dataset][1] of handwritten digits is a popular benchmark for image-based machine learning methods but researchers have renewed efforts to update it and develop drop-in replacements that are more challenging for computer vision and original for real-world applications. As noted in one recent replacement called the Fashion-MNIST [dataset][2], the Zalando [researchers][3] quoted the startling claim that "Most pairs of MNIST digits (784 total pixels per sample) can be distinguished pretty well by just one pixel". To stimulate the community to develop more drop-in replacements, the Sign Language MNIST is presented here and follows the same CSV format with labels and pixel values in single rows. The American Sign Language letter database of hand gestures represent a multi-class problem with 24 classes of letters (excluding J and Z which require motion).
The dataset format is patterned to match closely with the classic MNIST. Each training and test case represents a label (0-25) as a one-to-one map for each alphabetic letter A-Z (with no cases for 9=J or 25=Z because of gesture motions). The training data (27,455 cases) and test data (7,172 cases) are approximately half the size of the standard MNIST but otherwise similar, with a header row of label, pixel1, pixel2, ..., pixel784 which represents a single 28x28 pixel image with grayscale values between 0-255. The original hand gesture [image data][4] represented multiple users repeating the gesture against different backgrounds. The Sign Language MNIST data came from greatly extending the small number (1,704) of color images, which were not cropped around the hand region of interest. To create new data, an image pipeline based on ImageMagick was used, including cropping to hands-only, grayscaling, resizing, and then creating at least 50+ variations to enlarge the quantity. The modification and expansion strategy used filters ('Mitchell', 'Robidoux', 'Catrom', 'Spline', 'Hermite'), along with 5% random pixelation, +/- 15% brightness/contrast, and finally 3 degrees of rotation. Because of the tiny size of the images, these modifications effectively alter the resolution and class separation in interesting, controllable ways.
This dataset was inspired by the Fashion-MNIST [2] and the machine learning pipeline for gestures by Sreehari [4].
A robust visual recognition algorithm could provide not only new benchmarks that challenge modern machine learning methods such as Convolutional Neural Nets, but could also pragmatically help the deaf and hard-of-hearing better communicate using computer vision applications. The National Institute on Deafness and other Communications Disorders (NIDCD) indicates that the 200-year-old American Sign Language is a complete, complex language (of which letter gestures are only part) but is the primary language for many deaf North Americans. ASL is the leading minority language in the U.S. after the "big four": Spanish, Italian, German, and French. One could implement computer vision on an inexpensive board computer like a Raspberry Pi with OpenCV and some text-to-speech to enable improved and automated translation applications.
[1]: https://en.wikipedia.org/wiki/MNIST_database
[2]: https://github.com/zalandoresearch/fashion-mnist
[3]: https://arxiv.org/abs/1708.07747
[4]: https://github.com/mon95/Sign-Language-and-Static-gesture-recognition-using-sklearn
|
Sign Language MNIST
|
Drop-In Replacement for MNIST for Hand Gesture Recognition Tasks
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.064459
| 0.030113
| 0.138845
| 0.087837
| 0.080313
| 0.077251
| 88
|
57,726
| 982,921
| 5,618,523
| 5,618,523
| null | 1,660,340
| 1,696,625
| 999,426
|
Dataset
|
11/19/2020 07:38:44
|
11/19/2020
| 990,568
| 129,760
| 2,388
| 478
| 1
|
12/26/2020
| 982,921
|
A manager at the bank is disturbed by more and more customers leaving their credit card services. They would really appreciate it if one could predict who is going to churn, so they can proactively go to the customer to provide better services and turn the customer's decision in the opposite direction.
I got this dataset from https://leaps.analyttica.com/home. I have been using it for a while to get datasets and work on them to produce fruitful results. The site explains how to solve a particular business problem.
This dataset consists of 10,000 customers with their age, salary, marital status, credit card limit, credit card category, etc. There are nearly 18 features.
We have only 16.07% of customers who have churned, so it is a bit difficult to train a model to predict churning customers.
|
Credit Card customers
|
Predict Churning customers
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.081938
| 0.043608
| 0.130395
| 0.062853
| 0.079699
| 0.076682
| 89
|
42,428
| 818,300
| 3,650,837
| 3,650,837
| null | 1,400,440
| 1,433,199
| 833,406
|
Dataset
|
08/05/2020 21:27:01
|
08/05/2020
| 886,265
| 167,408
| 873
| 463
| 1
|
07/22/2025
| 818,300
|
### Context
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective is to predict based on diagnostic measurements whether a patient has diabetes.
### Content
Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
- Pregnancies: Number of times pregnant
- Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test
- BloodPressure: Diastolic blood pressure (mm Hg)
- SkinThickness: Triceps skin fold thickness (mm)
- Insulin: 2-Hour serum insulin (mu U/ml)
- BMI: Body mass index (weight in kg/(height in m)^2)
- DiabetesPedigreeFunction: Diabetes pedigree function
- Age: Age (years)
- Outcome: Class variable (0 or 1)
#### Sources:
(a) Original owners: National Institute of Diabetes and Digestive and
Kidney Diseases
(b) Donor of database: Vincent Sigillito ([email protected])
Research Center, RMI Group Leader
Applied Physics Laboratory
The Johns Hopkins University
Johns Hopkins Road
Laurel, MD 20707
(301) 953-6231
(c) Date received: 9 May 1990
#### Past Usage:
1. Smith,~J.~W., Everhart,~J.~E., Dickson,~W.~C., Knowler,~W.~C., \&
Johannes,~R.~S. (1988). Using the ADAP learning algorithm to forecast
the onset of diabetes mellitus. In {\it Proceedings of the Symposium
on Computer Applications and Medical Care} (pp. 261--265). IEEE
Computer Society Press.
The diagnostic, binary-valued variable investigated is whether the
patient shows signs of diabetes according to World Health Organization
criteria (i.e., if the 2 hour post-load plasma glucose was at least
200 mg/dl at any survey examination or if found during routine medical
care). The population lives near Phoenix, Arizona, USA.
Results: Their ADAP algorithm makes a real-valued prediction between
0 and 1. This was transformed into a binary decision using a cutoff of
0.448. Using 576 training instances, the sensitivity and specificity
of their algorithm was 76% on the remaining 192 instances.
#### Relevant Information:
Several constraints were placed on the selection of these instances from
a larger database. In particular, all patients here are females at
least 21 years old of Pima Indian heritage. ADAP is an adaptive learning
routine that generates and executes digital analogs of perceptron-like
devices. It is a unique algorithm; see the paper for details.
#### Number of Instances: 768
#### Number of Attributes: 8 plus class
#### For Each Attribute: (all numeric-valued)
1. Number of times pregnant
2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable (0 or 1)
#### Missing Attribute Values: Yes
#### Class Distribution: (class value 1 is interpreted as "tested positive for diabetes")
|
Diabetes Dataset
|
This dataset is originally from the N. Inst. of Diabetes & Diges. & Kidney Dis.
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.07331
| 0.015942
| 0.168227
| 0.060881
| 0.07959
| 0.076581
| 90
|
18,392
| 623,289
| 1,526,260
| 1,526,260
| null | 1,111,676
| 1,141,936
| 637,462
|
Dataset
|
04/27/2020 07:27:19
|
04/27/2020
| 352,560
| 150,889
| 526
| 945
| 1
|
07/22/2025
| 623,289
|
### Context
A new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. The images were chosen from six different Flickr groups, and tend not to contain any well-known people or locations, but were manually selected to depict a variety of scenes and situations.
### Content
What's inside is more than just rows and columns. Make it easy for others to get started by describing how you acquired the data and what time period it represents, too.
### Acknowledgements
We wouldn't be here without the help of others. If you owe any attributions or thanks, include them here along with any citations of past research.
### Inspiration
Your data will be in front of the world's largest data science community. What questions do you want to see answered?
|
Flickr 8k Dataset
|
Flickr8k Dataset for image captioning.
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.029163
| 0.009605
| 0.151628
| 0.12426
| 0.078664
| 0.075723
| 91
|
5,492
| 5,839
| 998,023
| null | 1,146
| 18,613
| 18,613
| 12,132
|
Dataset
|
12/01/2017 19:19:36
|
02/06/2018
| 814,395
| 135,151
| 1,430
| 646
| 1
|
11/06/2019
| 5,839
|
# NIH Chest X-ray Dataset
---
### National Institutes of Health Chest X-Ray Dataset
Chest X-ray exams are one of the most frequent and cost-effective medical imaging examinations available. However, clinical diagnosis of a chest X-ray can be challenging and sometimes more difficult than diagnosis via chest CT imaging. The lack of large publicly available datasets with annotations means it is still very difficult, if not impossible, to achieve clinically relevant computer-aided detection and diagnosis (CAD) in real world medical sites with chest X-rays. One major hurdle in creating large X-ray image datasets is the lack of resources for labeling so many images. Prior to the release of this dataset, [Openi][1] was the largest publicly available source of chest X-ray images with 4,143 images available.
This NIH Chest X-ray Dataset is comprised of 112,120 X-ray images with disease labels from 30,805 unique patients. To create these labels, the authors used Natural Language Processing to text-mine disease classifications from the associated radiological reports. The labels are expected to be >90% accurate and suitable for weakly-supervised learning. The original radiology reports are not publicly available but you can find more details on the labeling process in this Open Access paper: "ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases." (*Wang et al.*)
[Link to paper][30]
[1]: https://openi.nlm.nih.gov/
<br>
### Data limitations:
1. The image labels are NLP extracted so there could be some erroneous labels but the NLP labeling accuracy is estimated to be >90%.
2. Very limited numbers of disease region bounding boxes (See BBox_list_2017.csv)
3. Chest x-ray radiology reports are not anticipated to be publicly shared. Parties who use this public dataset are encouraged to later share their "updated" image labels and/or new bounding boxes from their own studies, perhaps through manual annotation
<br>
### File contents
- **Image format**: 112,120 total images with size 1024 x 1024
- **images_001.zip**: Contains 4999 images
- **images_002.zip**: Contains 10,000 images
- **images_003.zip**: Contains 10,000 images
- **images_004.zip**: Contains 10,000 images
- **images_005.zip**: Contains 10,000 images
- **images_006.zip**: Contains 10,000 images
- **images_007.zip**: Contains 10,000 images
- **images_008.zip**: Contains 10,000 images
- **images_009.zip**: Contains 10,000 images
- **images_010.zip**: Contains 10,000 images
- **images_011.zip**: Contains 10,000 images
- **images_012.zip**: Contains 7,121 images
- **README_ChestXray.pdf**: Original README file
- **BBox_list_2017.csv**: Bounding box coordinates. *Note: each box starts at (x, y) and extends w pixels horizontally and h pixels vertically* (a short plotting sketch follows this file list)
- Image Index: File name
- Finding Label: Disease type (Class label)
- Bbox x
- Bbox y
- Bbox w
- Bbox h
- **Data_entry_2017.csv**: Class labels and patient data for the entire dataset
- Image Index: File name
- Finding Labels: Disease type (Class label)
- Follow-up #
- Patient ID
- Patient Age
- Patient Gender
- View Position: X-ray orientation
- OriginalImageWidth
- OriginalImageHeight
- OriginalImagePixelSpacing_x
- OriginalImagePixelSpacing_y
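As a quick orientation to the bounding-box format noted above, here is a minimal sketch that draws one annotated box on its image. The column names, file paths, and the choice of the first row are assumptions -- check the CSV header and your extracted image folders before running it:
```
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg

# Column names and paths below are assumptions -- verify against the CSV header.
boxes = pd.read_csv("BBox_list_2017.csv")
row = boxes.iloc[0]
x, y, w, h = row["Bbox x"], row["Bbox y"], row["Bbox w"], row["Bbox h"]

# Images are 1024 x 1024; adjust the folder to wherever you unzipped them.
img = mpimg.imread("images/" + row["Image Index"])
fig, ax = plt.subplots()
ax.imshow(img, cmap="gray")
# The box starts at (x, y) and extends w pixels right and h pixels down.
ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red", linewidth=2))
ax.set_title(row["Finding Label"])
plt.show()
```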
<br>
### Class descriptions
There are 15 classes (14 diseases, and one for "No findings"). Images can be classified as "No findings" or one or more disease classes:
- Atelectasis
- Consolidation
- Infiltration
- Pneumothorax
- Edema
- Emphysema
- Fibrosis
- Effusion
- Pneumonia
- Pleural_thickening
- Cardiomegaly
- Nodule
- Mass
- Hernia
<br>
### Full Dataset Content
There are 12 zip files in total, ranging from ~2 GB to 4 GB in size. Additionally, we randomly sampled 5% of these images and created a smaller dataset for use in Kernels. The random sample contains 5,606 X-ray images and class labels.
- [Sample][9]: sample.zip
[9]: https://www.kaggle.com/nih-chest-xrays/sample
<br>
### Modifications to original data
- Original TAR archives were converted to ZIP archives to be compatible with the Kaggle platform
- CSV headers slightly modified to be more explicit in comma separation and also to allow fields to be self-explanatory
<br>
### Citations
- Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. IEEE CVPR 2017, [ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.pdf][30]
- NIH News release: [NIH Clinical Center provides one of the largest publicly available chest x-ray datasets to scientific community][30]
- Original source files and documents: [https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345][31]
<br>
### Acknowledgements
This work was supported by the Intramural Research Program of the NIH Clinical Center (clinicalcenter.nih.gov) and the National Library of Medicine (www.nlm.nih.gov).
[30]: https://www.nih.gov/news-events/news-releases/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community
[31]: https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345
|
NIH Chest X-rays
|
Over 112,000 Chest X-ray images from more than 30,000 unique patients
|
CC0: Public Domain
| 3
| 3
| null | 0
| 0.067365
| 0.026113
| 0.135813
| 0.084944
| 0.078559
| 0.075626
| 92
|
23,441
| 102,285
| 1,840,515
| 1,840,515
| null | 242,592
| 254,413
| 111,993
|
Dataset
|
01/08/2019 13:01:57
|
01/08/2019
| 840,080
| 183,002
| 581
| 363
| 2
|
07/22/2025
| 102,285
|
### Context
MNIST is a subset of a larger set available from NIST (it's copied from http://yann.lecun.com/exdb/mnist/)
### Content
The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples.
Four files are available:
- train-images-idx3-ubyte.gz: training set images (9912422 bytes)
- train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
- t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
- t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
### How to read
See [sample MNIST reader][1]
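If you prefer to parse the IDX files directly instead of following the linked kernel, a minimal reader looks roughly like this (a sketch, not the linked kernel's code; it assumes the four .gz files sit in the working directory):
```
import gzip
import struct
import numpy as np

def read_idx_images(path):
    """Read an idx3-ubyte image file into a (n, rows, cols) uint8 array."""
    with gzip.open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, f"unexpected magic number {magic}"
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

def read_idx_labels(path):
    """Read an idx1-ubyte label file into a (n,) uint8 array."""
    with gzip.open(path, "rb") as f:
        magic, n = struct.unpack(">II", f.read(8))
        assert magic == 2049, f"unexpected magic number {magic}"
        return np.frombuffer(f.read(), dtype=np.uint8)

X_train = read_idx_images("train-images-idx3-ubyte.gz")
y_train = read_idx_labels("train-labels-idx1-ubyte.gz")
print(X_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
```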
### Acknowledgements
* Yann LeCun, Courant Institute, NYU
* Corinna Cortes, Google Labs, New York
* Christopher J.C. Burges, Microsoft Research, Redmond
### Inspiration
Many methods have been tested with this training set and test set (see http://yann.lecun.com/exdb/mnist/ for more details)
[1]: https://www.kaggle.com/hojjatk/read-mnist-dataset
|
MNIST Dataset
|
The MNIST database of handwritten digits (http://yann.lecun.com)
|
Data files ยฉ Original Authors
| 1
| 1
| null | 0
| 0.06949
| 0.01061
| 0.183898
| 0.047732
| 0.077932
| 0.075045
| 93
|
184,257
| 2,818,963
| 9,355,447
| 9,355,447
| null | 4,862,520
| 4,929,374
| 2,853,848
|
Dataset
|
01/17/2023 06:21:15
|
01/17/2023
| 783,483
| 201,218
| 1,185
| 173
| 2
|
07/28/2025
| 2,818,963
|
This dataset contains ratings and reviews for 1,000+ Amazon products, as listed on the official Amazon website.
**Features**
- product_id - Product ID
- product_name - Name of the Product
- category - Category of the Product
- discounted_price - Discounted Price of the Product
- actual_price - Actual Price of the Product
- discount_percentage - Percentage of Discount for the Product
- rating - Rating of the Product
- rating_count - Number of people who voted for the Amazon rating
- about_product - Description about the Product
- user_id - ID of the user who wrote review for the Product
- user_name - Name of the user who wrote review for the Product
- review_id - ID of the user review
- review_title - Short review
- review_content - Long review
- img_link - Image Link of the Product
- product_link - Official Website Link of the Product
**Inspiration**
Amazon is an American multinational technology company whose business interests include e-commerce, where it buys and stores inventory and takes care of everything from shipping and pricing to customer service and returns. I created this dataset so that people can explore it and work through tasks such as the following:
- Dataset Walkthrough
- Understanding Dataset Hierarchy
- Data Preprocessing
- Exploratory Data Analysis
- Data Visualization
- Making Recommendation System
These are just some of the things you can do with this dataset; it is by no means limited to them. A common first step is cleaning the price and rating columns, sketched below.
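A minimal sketch of that cleaning step follows. The file name and the exact string formats of the columns are assumptions -- inspect a few rows of the CSV first:
```
import pandas as pd

# File name is an assumption -- use the CSV name as downloaded from Kaggle.
df = pd.read_csv("amazon.csv")

# Prices typically arrive as strings with a currency symbol and thousands
# separators; strip everything except digits and the decimal point.
for col in ["discounted_price", "actual_price"]:
    df[col] = (df[col].astype(str)
                      .str.replace(r"[^0-9.]", "", regex=True)
                      .astype(float))

# discount_percentage is stored like "64%" and rating_count like "24,269"
# (assumed formats) -- normalize both to plain floats.
df["discount_percentage"] = (df["discount_percentage"].astype(str)
                             .str.rstrip("%").astype(float))
df["rating_count"] = (df["rating_count"].astype(str)
                      .str.replace(",", "").astype(float))

print(df[["discounted_price", "actual_price", "discount_percentage"]].describe())
```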
|
Amazon Sales Dataset
|
This dataset is having the data of 1K+ Amazon Product's Ratings and Reviews
|
CC BY-NC-SA 4.0
| 1
| 1
| null | 0
| 0.064808
| 0.021639
| 0.202203
| 0.022748
| 0.07785
| 0.074968
| 94
|
3,727
| 10,100
| 484,516
| null | 1,029
| 3,316,532
| 3,367,434
| 17,447
|
Dataset
|
01/17/2018 17:27:37
|
02/06/2018
| 925,957
| 148,785
| 1,734
| 379
| 1
|
11/06/2019
| 10,100
|
## Context
This dataset is a subset of Yelp's businesses, reviews, and user data. It was originally put together for the Yelp Dataset Challenge which is a chance for students to conduct research or analysis on Yelp's data and share their discoveries. In the most recent dataset you'll find information about businesses across 8 metropolitan areas in the USA and Canada.
## Content
This dataset contains five JSON files and the user agreement.
More information about those files can be found [here](https://www.yelp.com/dataset).
## Code snippet to read the files
In Python, you can read the JSON files like this (using the json and pandas libraries):
```
import json
import pandas as pd

data = []
# Each line of the file is a standalone JSON object, so parse line by line.
with open("yelp_academic_dataset_checkin.json") as data_file:
    for line in data_file:
        data.append(json.loads(line))
checkin_df = pd.DataFrame(data)
```
|
Yelp Dataset
|
A trove of reviews, businesses, users, tips, and check-in data!
|
Other (specified in description)
| 4
| 12
|
government
| 1
| 0.076593
| 0.031665
| 0.149513
| 0.049836
| 0.076902
| 0.074088
| 95
|
12,729
| 1,815
| 1,136,135
| 1,136,135
| null | 3,139
| 3,139
| 5,328
|
Dataset
|
08/03/2017 17:06:12
|
02/05/2018
| 613,404
| 181,414
| 799
| 437
| 2
|
07/07/2025
| 1,815
|
### Context
To explore regression algorithms in more depth.
### Content
Each record in the database describes a Boston suburb or town. The data was drawn from the Boston Standard Metropolitan Statistical Area (SMSA) in 1970. The attributes are defined as follows (taken from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing)):
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centers
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population
14. MEDV: Median value of owner-occupied homes in $1000s
We can see that the input attributes have a mixture of units.
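Because the attributes are on very different scales, a reasonable baseline is to standardize them before fitting a regression. A minimal sketch follows; the CSV file name is an assumption about how you load the data, and MEDV is used as the target as described above:
```
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# File name is an assumption -- point this at your local copy of the data.
df = pd.read_csv("housing.csv")
X, y = df.drop(columns=["MEDV"]), df["MEDV"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Standardize the mixed-unit features, then fit ordinary least squares.
model = make_pipeline(StandardScaler(), LinearRegression())
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"test RMSE: {rmse:.2f} (in $1000s)")
```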
### Acknowledgements
Thanks to Dr.Jason
|
Boston House Prices
|
Regression predictive modeling machine learning problem from end-to-end Python
|
Other (specified in description)
| 1
| 1
|
lending, banking, crowdfunding, insurance, investing, currencies and foreign exchange
| 6
| 0.05074
| 0.014591
| 0.182302
| 0.057462
| 0.076274
| 0.073505
| 96
|
25,359
| 82,373
| 1,996,737
| 1,996,737
| null | 191,501
| 202,552
| 91,803
|
Dataset
|
11/25/2018 18:12:34
|
11/25/2018
| 658,135
| 158,567
| 1,243
| 519
| 1
|
03/03/2021
| 82,373
|
### Context
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. We cordially invite researchers from relevant fields to participate: The competition is designed to allow for participation without special domain knowledge. Our benchmark has the following properties:
- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database
### Acknowledgements
[INI Benchmark Website][1]
[1]: http://benchmark.ini.rub.de/
|
GTSRB - German Traffic Sign Recognition Benchmark
|
Multi-class, single-image classification challenge
|
CC0: Public Domain
| 1
| 1
| null | 0
| 0.05444
| 0.022699
| 0.159343
| 0.068245
| 0.076182
| 0.073419
| 97
|
6
| 17
| 993
| null | 3
| 742,210
| 762,847
| 989
|
Dataset
|
01/07/2016 00:38:08
|
02/06/2018
| 791,588
| 130,084
| 1,132
| 609
| 1
|
11/06/2019
| 17
|
*This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).*
As the original source says,
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service").
The data we're providing on Kaggle is a slightly reformatted version of the original source. It includes both a CSV file and SQLite database. The code that does these transformations is [available on GitHub](https://github.com/benhamner/crowdflower-airline-twitter-sentiment)
For example, it contains whether the sentiment of the tweets in this set was positive, neutral, or negative for six US airlines; see the [exploring airline Twitter sentiment data notebook](https://www.kaggle.com/benhamner/d/crowdflower/twitter-airline-sentiment/exploring-airline-twitter-sentiment-data).
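If you prefer the SQLite version, it can be read straight into pandas. This is a sketch only: the database file name, table name, and column names shown here are assumptions, so list the tables first and adjust accordingly:
```
import sqlite3
import pandas as pd

# File name is an assumption -- check the dataset's file listing.
con = sqlite3.connect("database.sqlite")

# List the tables in the database before querying.
tables = pd.read_sql_query(
    "SELECT name FROM sqlite_master WHERE type='table';", con)
print(tables)

# Load the tweets table; "Tweets" and the column names are assumptions
# based on the CSV version of the data.
tweets = pd.read_sql_query("SELECT * FROM Tweets;", con)
con.close()
print(tweets.groupby(["airline", "airline_sentiment"]).size())
```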
|
Twitter US Airline Sentiment
|
Analyze how travelers in February 2015 expressed their feelings on Twitter
|
CC BY-NC-SA 4.0
| 4
| 3
| null | 0
| 0.065479
| 0.020672
| 0.130721
| 0.080079
| 0.074237
| 0.071611
| 98
|
75,182
| 1,633,303
| 2,487,225
| 2,487,225
| null | 12,211,878
| 12,753,598
| 1,654,019
|
Dataset
|
10/07/2021 05:42:45
|
10/07/2021
| 556,345
| 25,074
| 1,377
| 1,487
| 1
|
11/29/2021
| 1,633,303
|
# Kaggle playground for the Big Data Analytics Engineer practical exam
Hello, this is **퇴근후딴짓** 🤗
I'm sharing datasets and tutorials for preparing for the Big Data Analytics Engineer practical exam.
If you are a beginner and this material alone isn't enough to get started, I recommend my [퇴근후딴짓 YouTube playlist](https://youtube.com/playlist?list=PLSlDi2AkDv82Qv7B3WiWypQSFmOCb-G_-) or the [practical-exam course for beginners](https://inf.run/HYmN).
If you write better code, please share it 😊 (Both Python and R are welcome.)
`Click the "View more" button below to see the full problem list :)`
If this material helped you learn, please leave feedback via this [link](https://www.kaggle.com/datasets/agileteam/bigdatacertificationkr/discussion) ✍️
**Practical-exam prep community**
- Practical-exam (Squid Game) study group info: [link](https://quakka.notion.site/10-1d77e3769e2e801eacced76144d3520c?pvs=4)
- Discord link: https://discord.gg/V8acvTnHhH
- Recruiting members for the practical-exam study group (both R and Python welcome)
------------------------------------------
## 📌 Recommended book and course
📕 Recommended book: [2026 Sinagong Big Data Analytics Engineer Practical Exam (Python)](https://product.kyobobook.co.kr/detail/S000216355151)
📗 Recommended course: [퇴근후딴짓's course](https://inf.run/XnzT) <- recommended if you are short on time or new to Python :)
## Official sample problems (ver2025)
📌 [Python solution](https://www.kaggle.com/code/agileteam/t3-ex-ver2025-py)
📌 [R solution](https://www.kaggle.com/code/agileteam/t3-ex-ver2025-r)
## New problems (2025.6)
[Type 1]
📌 t1-36 Monthly average overtime hours by department [P](https://www.kaggle.com/code/agileteam/t1-36-overwork-py)
📌 t1-37 Purchase analysis by category [P](https://www.kaggle.com/code/agileteam/t1-37-purchase-py)
📌 t1-38 Groupby + Pivot + conditional filtering + ratio calculation [P](https://www.kaggle.com/code/agileteam/t1-38-pivot-py)
📌 t1-39 퇴근후딴짓 training-data analysis (groupby transform) [P](https://www.kaggle.com/code/agileteam/t1-39-groupby-transform-py)
[Type 3]
📌 t3-Linear regression 3 [P](https://www.kaggle.com/code/agileteam/t3-regression3-py)
📌 t3-Logistic regression 3 [P](https://www.kaggle.com/code/agileteam/t3-logit3-py)
------------------------------------------
## 🔥 Onboarding 🔥
- Basics 1 - sampling, filtering, grouping: [Python link](https://www.kaggle.com/code/agileteam/basic-study1-py), [R link](https://www.kaggle.com/code/agileteam/basic-study1-r)
Open the link above, then click "Copy&Edit" at the top right.
-----------------------------------------------------------------
## 📌 Type 1 expected problems
| No. | Problem type | Python | R | Importance |
| ----- | --------------------- | ------ | ------ | ------ |
| T1-1 | Outliers (IQR) | [P](https://www.kaggle.com/agileteam/py-t1-1-iqr-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-1-iqr-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-2 | Outliers (fractional ages) | [P](https://www.kaggle.com/agileteam/py-t1-2-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-2-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-3 | Missing-value handling | [P](https://www.kaggle.com/agileteam/py-t1-3-map-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-3-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-4 | Skewness/kurtosis (log scale) | [P](https://www.kaggle.com/agileteam/py-t1-4-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-4-expected-questions-2) | ⭐️ |
| T1-5 | Standard deviation | [P](https://www.kaggle.com/agileteam/py-t1-5-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-5-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-6 | Groupby sum | [P](https://www.kaggle.com/agileteam/py-t1-6-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-6-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-7 | Value replacement | [P](https://www.kaggle.com/agileteam/py-t1-7-2-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-7-2-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-8 | Cumulative sum / interpolation | [P](https://www.kaggle.com/agileteam/py-t1-8-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-8-expected-questions-2) | ⭐️⭐️ |
| T1-9 | Standardization (median) | [P](https://www.kaggle.com/agileteam/py-t1-9-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-9-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-10 | Yeo-Johnson & Box-Cox | [P](https://www.kaggle.com/agileteam/py-t1-10-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-10-expected-questions-2) | ⭐️ |
| T1-11 | Min-max scaling | [P](https://www.kaggle.com/agileteam/py-t1-11-min-max-5-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-11-min-max-5-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-12 | Top-10 extraction | [P](https://www.kaggle.com/agileteam/py-t1-12-10-10-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-12-10-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-13 | Correlation | [P](https://www.kaggle.com/agileteam/py-t1-13-expected-questions) | [R](https://www.kaggle.com/limmyoungjin/r-t1-13-expected-questions-2) | ⭐️⭐️⭐️ |
| T1-14 | MultiIndex + Groupby | [P](https://www.kaggle.com/agileteam/py-t1-14-2-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-14-2-expected-question-2) | ⭐️⭐️⭐️ |
| T1-15 | Conditional filtering + median replacement | [P](https://www.kaggle.com/agileteam/py-t1-15-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-15-expected-question-2) | ⭐️⭐️⭐️ |
| T1-16 | Variance calculation | [P](https://www.kaggle.com/agileteam/py-t1-16-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-16-expected-question-2) | ⭐️⭐️ |
| T1-17 | Time series 1 (datetime) | [P](https://www.kaggle.com/agileteam/py-t1-17-1-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-17-1-expected-question-2) | ⭐️⭐️⭐️ |
| T1-18 | Time series 2 (weekday/weekend) | [P](https://www.kaggle.com/agileteam/py-t1-18-2-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-18-2-expected-question-2) | ⭐️⭐️⭐️ |
| T1-19 | Time series 3 (monthly totals) | [P](https://www.kaggle.com/agileteam/py-t1-19-3-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-19-3-expected-question-2) | ⭐️⭐️⭐️ |
| T1-20 | Data merging | [P](https://www.kaggle.com/agileteam/py-t1-20-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-20-expected-question-2) | ⭐️ |
| T1-21 | Binning data | [P](https://www.kaggle.com/agileteam/py-t1-21-expected-question) | [R](https://www.kaggle.com/limmyoungjin/r-t1-21-expected-question-2) | ⭐️ |
| T1-22 | Time series (weekly) | [P](https://www.kaggle.com/agileteam/t1-22-time-series4-weekly-data) | [R](https://www.kaggle.com/limmyoungjin/r-t1-22-time-series4-weekly-data-2) | ⭐️⭐️⭐️ |
| T1-23 | Duplicate removal and missing-value handling | [P](https://www.kaggle.com/agileteam/t1-23-drop-duplicates) | [R](https://www.kaggle.com/limmyoungjin/r-t1-23-drop-duplicates-2) | ⭐️⭐️⭐️ |
| T1-24 | Lagged feature | [P](https://www.kaggle.com/agileteam/t1-24-time-series5-lagged-feature) | [R](https://www.kaggle.com/limmyoungjin/r-t1-24-time-series5-2) |
| T1-25 | String handling (slicing) | [P](https://www.kaggle.com/agileteam/t1-25-str-slicing) | [R](https://www.kaggle.com/agileteam/t1-25-str-slicing-r) |
| T1-26 | String contains | [P](https://www.kaggle.com/agileteam/t1-26-str-contains) | [R](https://www.kaggle.com/agileteam/t1-26-str-contains-r) |
| T1-27 | String replacement | [P](https://www.kaggle.com/agileteam/t1-27-str-contains-replace) | [R](https://www.kaggle.com/agileteam/t1-27-str-contain-replace-r) |
| T1-28 | Frequency counts (value\_counts) | [P](https://www.kaggle.com/agileteam/t1-28-value-counts-index) | - |
| T1-29 | Date format conversion | [P](https://www.kaggle.com/agileteam/t1-29-datetime-format) | - |
| T1-30 | Time-series ratio calculation | [P](https://www.kaggle.com/agileteam/t1-30-datetime-percent) | - |
| T1-31 | Melt (all columns) | [P](https://www.kaggle.com/agileteam/t1-31-melt) | - |
| T1-32 | Melt (subset of columns) | [P](https://www.kaggle.com/agileteam/t1-33-melt2) | - |
| T1-33 | Timedelta calculation | [P](https://www.kaggle.com/agileteam/t1-33-timedelta-py) | [R](https://www.kaggle.com/agileteam/t1-33-timedelta-r) |
| T1-34 | Pattern/preference analysis (difficulty: high) | [P](https://www.kaggle.com/agileteam/t1-34-pattern-py) | [R](https://www.kaggle.com/agileteam/t1-34-pattern-r) |
| T1-35 | Feedback: strings and time series (difficulty: medium) | [P](https://www.kaggle.com/agileteam/t1-35-fb-py) | [R](https://www.kaggle.com/agileteam/t1-35-fb-r) |
## 📌 Type 2 expected problems
| No. | Problem | Prediction type | Target | Metric | Python | R |
| ---- | --------------------- | ----- | ------------- | -------- | ------ | ------ |
| T2-1 | Titanic | Classification | Survival | Accuracy | [P](https://www.kaggle.com/agileteam/t2-1-titanic-simple-baseline) | [R](https://www.kaggle.com/limmyoungjin/r-t2-1-titanic) |
| T2-2 | Pima Indians Diabetes | Classification | Diabetes | AUC | [P](https://www.kaggle.com/code/agileteam/t2-2-pima-indians-diabetes-v2) | [R](https://www.kaggle.com/limmyoungjin/r-t2-2-pima-indians-diabetes) |
| T2-3 | Adult Census Income | Classification | High income (>50K) | Accuracy | [P](https://www.kaggle.com/agileteam/t2-3-adult-census-income-tutorial) | [R](https://www.kaggle.com/limmyoungjin/r-t2-3-adult-census-income) |
| T2-4 | House Prices | Regression | House price | RMSE | [P](https://www.kaggle.com/code/agileteam/t2-4-house-prices-regression), [XGB](https://www.kaggle.com/code/agileteam/house-prices-starter-xgb) | [R](https://www.kaggle.com/limmyoungjin/r-t2-4-house-prices) |
| T2-5 | Insurance Charges | Regression | Insurance charges | RMSE | [P](https://www.kaggle.com/agileteam/insurance-starter-tutorial) | [R](https://www.kaggle.com/limmyoungjin/r-t2-5-insurance-prediction) |
| T2-6 | Bike Sharing Demand | Regression | Rental count | RMSLE | [P](https://www.kaggle.com/code/agileteam/t2-6-bike-regressor) | [R](https://www.kaggle.com/limmyoungjin/r-t2-6-bike-sharing-demand) |
### Competitions (Type 2)
| Problem | Prediction type | Metric | Competition link | Python | R |
| ---------- | ----- | ------------------ | ------ | ------ | ------ |
| Credit card usage | Classification | AUC | [Kaggle](https://www.kaggle.com/competitions/big-data-analytics-certification-kr-2024-2) | [P](https://www.kaggle.com/code/agileteam/credit-card-baseline-rf-python) | [R](https://www.kaggle.com/code/agileteam/cc-baseline-rf-r) |
| House price prediction | Regression | RMSE | [Kaggle](https://www.kaggle.com/competitions/big-data-analytics-certification-kr-2024-3) | [P](https://www.kaggle.com/code/agileteam/house-price-baseline-py-rmse26339) | [R](https://www.kaggle.com/code/agileteam/house-price-baseline-r-rmse28940) |
| Crab age prediction | Regression | MAE or RMSE (not disclosed) | [Kaggle](https://www.kaggle.com/competitions/2024-4-big-data-analytics-certification-kr) | In preparation | In preparation |
| Smoker prediction | Classification | Accuracy | [Kaggle](https://www.kaggle.com/competitions/smoker-binary-class) | In preparation | In preparation |
| Academic status prediction | Multiclass classification | Accuracy | [Kaggle](https://www.kaggle.com/competitions/playground-series-s4e6) | [P](https://www.kaggle.com/code/agileteam/s4e6-baseline-0-82) | - |
| Wine quality classification | Classification | Accuracy (estimated) | [Kaggle](https://www.kaggle.com/competitions/wine-qualit-dataset) | [P](https://www.kaggle.com/code/agileteam/wine-quality-baseline) | - |
## 📌 Type 3 expected problems
A minimal worked example of the first problem type (a one-sample t-test) follows the table.
| Problem type | Python | R |
| ---------------------- | ------ | ------ |
| One-sample t-test | [P](https://www.kaggle.com/agileteam/t3-ttest-1samp) | [R](https://www.kaggle.com/agileteam/t3-ttest-1samp-r) |
| Independent-samples t-test | [P](https://www.kaggle.com/agileteam/t3-ttest-ind) | [R](https://www.kaggle.com/agileteam/t3-ttest-ind-r) |
| Paired-samples t-test | [P](https://www.kaggle.com/agileteam/t3-example) | [R](https://www.kaggle.com/agileteam/t3-example-r) |
| One-way ANOVA | [P](https://www.kaggle.com/agileteam/t3-anova) | [R](https://www.kaggle.com/agileteam/t3-anova-r) |
| Normality test (Shapiro-Wilk) | [P](https://www.kaggle.com/agileteam/t3-shapiro-wilk) | [R](https://www.kaggle.com/agileteam/t3-shapiro-wilk-r) |
| Regression model (correlation coefficient) | [P](https://www.kaggle.com/agileteam/t3-correlation) | [R](https://www.kaggle.com/agileteam/t3-correlation-r) |
| Logistic regression | [P](https://www.kaggle.com/code/agileteam/t3-2-example-py) | [R](https://www.kaggle.com/code/agileteam/t3-2-example-r) |
| Two-group mean comparison (t/F test) | [P](https://www.kaggle.com/code/agileteam/t3-ttest-anova-py) | [R](https://www.kaggle.com/code/agileteam/t3-ttest-anova-r) |
| Goodness-of-fit test (chi-square) | [P](https://www.kaggle.com/agileteam/t3-chisquare-py) | [R](https://www.kaggle.com/agileteam/t3-chisquare-r) |
| Support/confidence/lift | [P](https://www.kaggle.com/agileteam/t3-scl-py) | [R](https://www.kaggle.com/agileteam/t3-scl-r) |
| Poisson distribution | [P](https://www.kaggle.com/agileteam/t3-pmf-py) | [R](https://www.kaggle.com/agileteam/t3-pmf-r) |
| Independence test (contingency table) | [P](https://www.kaggle.com/agileteam/t3-chi2-contingency-py) | [R](https://www.kaggle.com/agileteam/t3-chi2-contingency-r) |
| Bernoulli/binomial distribution | [P](https://www.kaggle.com/agileteam/t3-probability-py) | [R](https://www.kaggle.com/agileteam/t3-probability-r) |
| Point estimation / interval estimation | [P](https://www.kaggle.com/agileteam/t3-confidence-interval-py) | [R](https://www.kaggle.com/agileteam/t3-confidence-interval-r) |
| Two-way ANOVA | [P](https://www.kaggle.com/agileteam/t3-two-way-anova-py) | [R](https://www.kaggle.com/agileteam/t3-two-way-anova-r) |
| Logistic regression, part 2 | [P](https://www.kaggle.com/code/agileteam/t3-logit-py) | [R](https://www.kaggle.com/code/agileteam/t3-logit-r) |
| Residual deviance | [P](https://www.kaggle.com/code/agileteam/t3-logit-deviance-py) | [R](https://www.kaggle.com/code/agileteam/t3-logit-deviance-r) |
| AIC / BIC calculation | [P](https://www.kaggle.com/code/agileteam/t3-aic-bic) | - |
| Multiple linear regression | [P](https://www.kaggle.com/code/agileteam/t3-regression-py) | - |
| Nonparametric test (Wilcoxon) | [P](https://www.kaggle.com/code/agileteam/1samp-wilcoxon-py) | [R](https://www.kaggle.com/code/agileteam/1samp-wilcoxon-r) |
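To give a flavor of the Type 3 problems listed above, here is a minimal one-sample t-test sketch in Python. The sample values and the hypothesized mean are made up for illustration; in the exam you would use the provided column instead:
```
import numpy as np
from scipy import stats

# Toy sample -- replace with the column given in the problem.
sample = np.array([72, 68, 75, 71, 69, 74, 70, 73])

# H0: the population mean is 70; two-sided alternative.
t_stat, p_value = stats.ttest_1samp(sample, popmean=70)
print(f"t = {t_stat:.4f}, p = {p_value:.4f}")

# Reject H0 at the 5% significance level if p < 0.05.
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```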
## 📌 Past exam problems (rounds 2-9)
| Round | Type 1 | Type 2 | Solution video |
| ----- | ------ | ------ | -------------- |
| Round 2 | [Python](https://www.kaggle.com/agileteam/tutorial-t1-2-python) / [R](https://www.kaggle.com/limmyoungjin/tutorial-t1-2-r-2) | [Python](https://www.kaggle.com/agileteam/tutorial-t2-2-python) / [R](https://www.kaggle.com/limmyoungjin/tutorial-t2-2-r) | [Type 1](https://youtu.be/Jh3rJaZlEg0) |
| Round 3 | [Problems 1-3](https://www.kaggle.com/code/agileteam/3rd-type1-1-3-1-1) | [Baseline](https://www.kaggle.com/code/agileteam/3rd-type2-3-2-baseline) | [Type 1 & 2](https://youtu.be/QpNufh_ZV7A) |
| Round 4 | [Python](https://www.kaggle.com/agileteam/4th-type1-python) / [R](https://www.kaggle.com/code/yimstar9/4th-type1) | [Python](https://www.kaggle.com/agileteam/4th-t2-python) / [R](https://www.kaggle.com/code/agileteam/4th-t2-r) | [Type 1](https://youtu.be/XAT0qvN5tnA), [Type 2](https://youtu.be/diP0q1YzVFg) |
| Round 5 | - | [Data/competition](https://www.kaggle.com/competitions/big-data-analytics-certification-kr-2023-5th/) | [Type 2](https://youtu.be/2n1nFbNf_5g) |
| Rounds 6-9 | Full scope including Types 1, 2 and 3 | See [2026 Sinagong Big Data Analytics Engineer Practical Exam](https://product.kyobobook.co.kr/detail/S000216355151) and the [Inflearn course](https://inf.run/HYmN) | Available |
## Official sample problems
```Solution videos:``` 🖥️ [Type 1](https://youtu.be/E86QFVXPm5Q), 🖥️ [Type 2](https://youtu.be/_GIBVt5-khk)
- Type 1:
P: https://www.kaggle.com/agileteam/tutorial-t1-python
R: https://www.kaggle.com/limmyoungjin/tutorial-t1-r
- Type 2: one year of department-store customer data (dataq official example)
P: https://www.kaggle.com/agileteam/t2-exercise-tutorial-baseline
## 📌 Mock exams
### Mock exam 1
- Type 1: [Python link](https://www.kaggle.com/agileteam/mock-exam1-type1-1-tutorial), [R link](https://www.kaggle.com/limmyoungjin/mock-exam1-type1-1)
- Type 2: [problem template](https://www.kaggle.com/agileteam/mock-exam-t2-exam-template), [beginner starter code](https://www.kaggle.com/agileteam/mock-exam-t2-starter-tutorial), [solution code / baseline](https://www.kaggle.com/agileteam/mock-exam-t2-baseline-tutorial)
### Mock exam 2 (exam environment)
- Type 1: [link](https://www.kaggle.com/code/agileteam/mock-exam2-type1-1-2) (exam environment)
- Type 2 (regression): [link](https://www.kaggle.com/code/agileteam/t2-2-2-baseline-r2)
### 📌 Machine learning tutorials for beginners (selected from notebooks shared by the community 🙏)
- https://www.kaggle.com/ohseokkim/t2-2-pima-indians-diabetes author: @ohseokkim 👏
- https://www.kaggle.com/wltjd54/insurance-prediction-full-ver author: @wltjd54 👏
### 📌 Must-read before the exam
- Guide to preparing a reference (cheat) sheet in the exam environment: https://www.kaggle.com/agileteam/tip-guide
- Practicing Type 1 in the Goorm environment (external data)
- Python: https://www.kaggle.com/agileteam/tip-how-to-use-ide
- R: https://www.kaggle.com/limmyoungjin/tip-how-to-use-ide
- Quick summary of pandas statistical functions: https://www.kaggle.com/agileteam/pandas-statistical-function
- Loading json and xml files: https://www.kaggle.com/agileteam/tip-data-load-json-and-xml
## 📌 Code 📌
- How to use: click a notebook (code), then click **'copy & edit'** at the top right; the notebook opens together with the dataset it uses!
- Tutorials for expected problems and past exam types
- Mock problems and solutions (search for "kim tae heon")
- Type 1: search for 'T1'!
- Type 2: search for 'T2'!
## 🦑 Practical-exam study group (Squid Game) 🦑
- Member recruitment starts 5 weeks before the exam
- Intensive study starts 4 weeks before the exam
## 📢 Basic learning materials
### Python, pandas, machine learning / 퇴근후딴짓
- Crash-course basics lecture for passing the exam (paid): https://inf.run/XnzT
### 🐍 Python / TeddyNote
- Python introductory lecture (free): https://youtu.be/dpwTOQri42s
- Python e-book (free): https://wikidocs.net/book/6708
### 🐼 Pandas / TeddyNote
- Pandas introductory lecture (paid): https://www.udemy.com/course/pandas-i/
- Pandas e-book (free): https://wikidocs.net/book/4639
I hope we can study and grow together! :) If this material helped you, please click upvote 😄
### [Notice]
- Do not copy and reuse this material without consent; share it by link instead
- Unauthorized redistribution of this material is prohibited
|
Big Data Certification KR
|
퇴근후딴짓's Big Data Analytics Engineer practical exam community (Python, R tutorial code)
|
Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)
| 26
| 26
| null | 0
| 0.04602
| 0.025146
| 0.025197
| 0.195529
| 0.072973
| 0.070433
| 99
|
15,171
| 4,247
| 1,301,025
| 1,301,025
| null | 6,570
| 6,570
| 9,871
|
Dataset
|
11/09/2017 07:34:35
|
02/05/2018
| 640,998
| 190,808
| 347
| 198
| 3
|
09/29/2021
| 4,247
| null |
Iris.csv
| null |
CC0: Public Domain
| 1
| 1
| null | 0
| 0.053022
| 0.006337
| 0.191742
| 0.026036
| 0.069284
| 0.066989
| 100
|