Mohd T.

@tausy

4.9 (110 reviews)
6.4
$35 USD / hour
Saharanpur, India
Joined November 4, 2012
3 recommendations

Jobs completed: 96%
On budget: 83%
On time: 95%
Repeat hire rate: 17%

ML, AI, Data Science, Python, Hadoop, Databases

- Data Scientist with over 7 years of industry experience and a thorough understanding of Machine Learning, Data Analysis, Big Data/Hadoop, ETL, and Databases.
- I hold a Master's degree in Data Science from Trinity College Dublin and a Bachelor's degree in Computer Science.
- Currently working as a data scientist at one of the world's largest banking and financial firms.
- Solid expertise in analyzing and maintaining large datasets.
- Honed my skills in Data Ingestion, Data Analysis, Data Migration, Data Consolidation, Data Processing, Data Visualization, and Data Mining.
- Over my 7-year career, I have worked primarily on Predictive Modeling, Machine Learning, and Hadoop to deliver cutting-edge predictive models in the Healthcare, Aviation, and Financial sectors.
- Extensive experience building machine learning applications in Python and its ML stack, including NumPy, Pandas, Scikit-Learn, and Matplotlib.
- Experience designing and implementing data analytics pipelines and ML systems on big data using PySpark.
- Worked extensively with Big Data and Hadoop stack tools including, but not limited to, Sqoop, Flume, Oozie, Hive, Impala, HDFS, and MapReduce.
- Worked for years on numerous SQL, PL/SQL, ETL, Informatica, SSIS, and Informatica DIH projects.
- Proficient in the Java and Python programming languages; I also work with the R statistical language.
- Current areas of interest include Data Science, Data Analytics, Machine Learning, Predictive Modeling, Knowledge Discovery in Databases (KDD), Data Mining, Web Mining, and Information Retrieval.

Contact Mohd T. about your job

Log in to discuss the details via chat.

Portfolio items

Classify Attacks and Normal Traffic Data Using PySpark

Designed a binary classifier to separate attack traffic from normal traffic. The data were derived from the raw network packets of the UNSW-NB15 dataset, which was created with the IXIA PerfectStorm tool in the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS) to generate a hybrid of real modern normal activities and synthetic contemporary attack behaviours. The Tcpdump tool was used to capture 100 GB of raw traffic (e.g., Pcap files). The dataset covers nine attack types: Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. The Argus and Bro-IDS tools were used, and twelve algorithms were developed to generate a total of 49 features with the class label.
Authorship Attribution Using Machine Learning

Authorship attribution attempts to predict the author of a text written by an unknown author from a list of known authors. This project examined how well various supervised machine learning methods predict the authors of unknown texts. We compared Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest on the authorship attribution task using 5-fold cross-validation, then chose the best algorithm and used it for the final predictions.

A publicly available dataset from the UCI Machine Learning Repository was used for the author identification study.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three and achieved a mean accuracy of 0.96 over the 5 folds. The model performed well on the unknown texts, predicting each author with 99% accuracy.
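The model comparison described above can be sketched with scikit-learn as below. The toy two-author corpus is invented for illustration; the real project used the Victorian Era Authorship Attribution dataset linked above, and the TF-IDF features are one plausible text representation, not necessarily the one used.

```python
# Compare four classifiers on a toy authorship task with 5-fold cross-validation.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy corpus: five short texts from each of two invented "authors".
texts = [
    "the sea was grey and cold that bleak morning",
    "she walked slowly along the grey and silent shore",
    "the cold wind came in from the grey water",
    "a grey mist hung over the cold harbour at dawn",
    "the morning tide was cold and the sky grey",
    "blazing sunlight poured across the golden desert sand",
    "he rode fast through the hot golden dunes",
    "the desert sun blazed on the golden rocks",
    "golden light and burning heat filled the canyon",
    "the hot sand glowed golden under the blazing sky",
]
authors = ["A"] * 5 + ["B"] * 5

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "LinearSVC": LinearSVC(),
    "MultinomialNB": MultinomialNB(),
    "RandomForest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# TF-IDF vectorization happens inside the pipeline, so each CV fold
# fits the vectorizer only on its own training split.
scores = {}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores[name] = cross_val_score(pipe, texts, authors, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {scores[name]:.2f}")

best = max(scores, key=scores.get)
print("best model:", best)
```

The winning pipeline would then be refit on all labeled texts and applied to the unknown ones.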
Stock markets are affected by many factors, causing uncertainty and high volatility. Time-series prediction is widely used in real-world applications such as weather forecasting and financial market prediction: it uses a continuous window of historical data to predict the value at the next time step. This project captures market sentiment from financial news and uses it to predict stock prices with time-series prediction.

The objectives of this project can be summarised as follows:

1. Analyse the sentiment of financial news.
2. Use the generated sentiment along with other important features to predict the price of a stock.
3. Build the proof of concept on Spark over Hadoop so it can be scaled up without re-engineering.

The linear regression model performed well, predicting Apple Inc. stock prices with a root mean square error (RMSE) of 0.0088 on the training set and 0.2010 on the test set.
Stock Price Prediction Using News Sentiment
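The regression step can be sketched as follows. This is a dependency-light illustration with NumPy and scikit-learn, not the project's Spark pipeline: the prices and sentiment scores are synthetic stand-ins for the real Apple price and news data, and the lag-1 feature design is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
sentiment = rng.uniform(-1, 1, n)  # hypothetical daily news-sentiment scores
price = np.empty(n)
price[0] = 100.0
for t in range(1, n):
    # Toy dynamics: price drifts with yesterday's sentiment plus noise.
    price[t] = price[t - 1] + 0.5 * sentiment[t - 1] + rng.normal(0, 0.1)

# Features for day t: previous close and previous day's sentiment.
X = np.column_stack([price[:-1], sentiment[:-1]])
y = price[1:]
split = int(0.8 * len(y))  # chronological train/test split

model = LinearRegression().fit(X[:split], y[:split])
rmse_train = float(np.sqrt(np.mean((model.predict(X[:split]) - y[:split]) ** 2)))
rmse_test = float(np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))
print(f"train RMSE {rmse_train:.4f}, test RMSE {rmse_test:.4f}")
```

Because the split is chronological rather than random, the test RMSE reflects genuine forecasting on later days, mirroring how the project's train/test RMSE figures differ.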
A deep CNN model was created to detect four shapes (star, circle, square, and triangle) in images. The model architecture was optimised to improve its classification accuracy.

Dataset: https://www.kaggle.com/datasets/smeschke/four-shapes

We trained a convolutional neural network (CNN) on the four-shapes dataset. The model uses convolutional, pooling, dropout, and normalization layers, plus dense layers with ReLU activations, to classify the shapes.

The network reached 100% training accuracy after ten epochs, with a validation accuracy of 98.5%.
FOUR SHAPES CLASSIFICATION USING DEEP LEARNING
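A network with the layer types listed above can be sketched in Keras. The layer sizes and the 64x64 grayscale input shape are illustrative assumptions, not the project's exact architecture; training would use `model.fit` on the Kaggle images, while here we only run a forward pass on random data to verify shapes.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # assumed input: 64x64 grayscale
    layers.Conv2D(16, 3, activation="relu"),
    layers.BatchNormalization(),            # normalization layer
    layers.MaxPooling2D(),                  # pooling layer
    layers.Conv2D(32, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),                    # dropout for regularization
    layers.Dense(64, activation="relu"),    # dense layer with ReLU units
    layers.Dense(4, activation="softmax"),  # star, circle, square, triangle
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on a random batch just to check output shapes.
probs = model.predict(np.random.rand(2, 64, 64, 1), verbose=0)
print(probs.shape)
```

Each row of `probs` is a softmax distribution over the four shape classes.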
This project used deep neural networks to classify images of animals. Our approach uses transfer learning: pre-trained CNN architectures are further trained on the specific animal images to predict the animal in a given image.

The VGG-19 architecture was used as the base model and further trained on the animal images to detect and classify the animals. We applied several data augmentation techniques (scaling, shearing, flipping, etc.) and trained the model for 200 epochs. The model predicted the animals with over 91% accuracy on the training set and 78% on the validation set; test accuracy was approximately 84%.
Animals Image Classification Using Deep/Transfer Learning
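The transfer-learning setup can be sketched in Keras. The class count, head layers, and augmentation parameters are illustrative assumptions; `weights=None` avoids a weight download here, whereas the project would load `weights="imagenet"` before fine-tuning.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19

# Pre-trained VGG-19 as a frozen feature extractor (weights omitted in this sketch).
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False

num_classes = 10  # hypothetical number of animal classes
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # new trainable classification head
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation of the kind mentioned (scale/shear/flip).
datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, zoom_range=0.2, shear_range=0.2, horizontal_flip=True)
print(model.output_shape)
```

Freezing the base means only the new head is trained at first; the gap between 91% training and 78% validation accuracy is typical when the head starts to overfit, which the dropout layer mitigates.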
The goal of this project is to create a system that detects whether or not someone is wearing a mask. CCTV cameras record images or real-time video footage; facial features are extracted from the footage and used to identify a mask on the face. The application detects face masks using a convolutional neural network, and it also counts the number of people wearing a proper face covering versus those who are not.
Mask Detection/Real-time Human Counting with Deep Learning
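The counting step alone can be sketched as follows. This assumes a hypothetical per-face mask classifier that returns one probability per detected face in a frame; the detection and CNN parts are not shown.

```python
def count_mask_compliance(mask_probs, threshold=0.5):
    """Tally detected faces into masked vs unmasked counts.

    mask_probs: per-face probabilities (hypothetical CNN output) that a
    proper face covering is worn; threshold decides the split.
    """
    masked = sum(1 for p in mask_probs if p >= threshold)
    return masked, len(mask_probs) - masked

# Example frame: five detected faces with classifier scores.
masked, unmasked = count_mask_compliance([0.9, 0.8, 0.3, 0.95, 0.1])
print(masked, unmasked)  # 3 masked, 2 unmasked
```

In the real system this tally would be recomputed per video frame to give a running count of compliance.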

Reviews

Showing 1 - 5 of 50+ reviews

5.0 · ₹16,000.00 INR
"He submitted the project within the provided time frame!"
Python, Machine Learning (ML)
Sri D. @srideepthisd3 · 3 months ago

5.0 · $50.00 USD
"Great and fast work; communication with him is quick and convenient. I am happy to work with him."
Python, NLP
N A. @inourah14 · 4 months ago

4.8 · ₹20,000.00 INR
"It was really nice to work with Mohd T. I would recommend this coder for all your relevant projects. Looking forward to working with him on future projects."
Python, Data Processing, Excel, Microsoft Access, MySQL
Siraj M. @sirajmultani · 8 months ago

5.0 · $120.00 USD
"The work is impeccable: delivered on time and complying with everything requested. One of the best freelancers on this site."
Java, Python, Machine Learning (ML), Big Data Sales, +1 more
F. O. @fortizclavijo · 9 months ago

5.0 · $350.00 SGD
"Hired him to work on one of my projects. He delivered the project proposal, poster, artefact, and report ahead of time, and guided me all the way when setting up the environment and running the program. His friendly approach made him easy to deal with."
Python, Software Architecture, Report Writing, Machine Learning (ML), Statistical Analysis
Albin V. @albinvarghese · 11 months ago

Experience

Data Scientist
Citibank Europe
Dec 2019 - Present
Working as a data scientist in the AI/ML team.

Hadoop/Machine Learning Developer
Opera Solutions
Sep 2017 - Present
Working on the Hadoop ecosystem in combination with Python and machine learning to deliver predictive models.

Hadoop Developer
Tata Global Delivery Center SA, Montevideo, Uruguay
Apr 2016 - Aug 2017 (1 year, 4 months)
Worked on the Hadoop ecosystem to deliver cutting-edge predictive models using Sqoop, Flume, Oozie, Hive, and MapReduce.

Education

MSc Data Science
Trinity College, Dublin, Ireland, 2018 - 2019 (1 year)

Bachelor of Technology (Computer Engineering)
Jamia Millia Islamia, India, 2009 - 2013 (4 years)

Qualifications

Certificate in Healthcare
Tata Business Domain Academy, 2014

Oracle Database Certified SQL Expert
Oracle University, 2015
SQL proficiency test certificate provided by Oracle

Oracle Database Certified PL/SQL Expert
Oracle University, 2015
PL/SQL proficiency test certificate provided by Oracle


Verifications

Preferred Freelancer
Identity verified
Payment method verified
Phone number verified
Email address verified
Facebook connected

Certifications

Preferred Freelancer Program SLA 1: 92%
SQL 1: 90%
Java 1: 87%
SQL 2: 85%
Python 1: 80%

Top skills

Python (76), Java (59), Big Data Sales (45), Hadoop (41), Machine Learning (ML) (17)

Freelancer ® is a registered Trademark of Freelancer Technology Pty Limited (ACN 142 189 759)
Copyright © 2023 Freelancer Technology Pty Limited (ACN 142 189 759)